Browsing by Author "Bradley, Thomas, committee member"
Now showing 1 - 20 of 44
Item Open Access
Algae-to-fuel pathways: integration of cultivation studies, process modeling, techno-economic analyses, and life cycle assessments (Colorado State University. Libraries, 2022) Chen, Peter H., author; Quinn, Jason C., advisor; Bradley, Thomas, committee member; Marchese, Anthony, committee member; Reardon, Kenneth, committee member

Researchers have recognized the potential of microalgae for renewable fuels for several decades, with a sharp increase in interest in the past decade. Though progress in algal cultivation and conversion has been substantial, commercialization of algal fuels has not yet been achieved. Economic metrics must be balanced with renewable fuel goals such that algal fuels can be competitive with conventional petroleum fuels. Through process modeling, techno-economic analysis (TEA), and life cycle assessment (LCA), the work in this dissertation seeks to illuminate improvements to algal fuel systems and outline the steps required to advance algal fuels toward commercialization. This work focuses heavily on hydrothermal liquefaction (HTL), a thermochemical process that converts whole wet biomass into biocrude, a petroleum crude oil analog. An aqueous phase, a gaseous phase, and a solid phase are created alongside the primary biocrude product. The aqueous phase of HTL notably contains a high content of nitrogen, which could potentially be recycled back to algae cultivation. At a scale where algal biofuels would meet a significant portion of transportation fuel needs, the demand for nutrients, specifically nitrogen and phosphorus, would exceed current global agricultural production. While recycling the aqueous phase could alleviate the demand for fresh nutrients in algae cultivation, it also contains toxic components, which include heterocyclic nitrogen compounds and phenolic compounds. The first phase of this research is an experimental component that focuses on methods for improving the recyclability of nutrients in the aqueous phase. 
A novel use of adsorbents (activated carbon and ion-exchange resins) was discovered for reducing the presence of components that are toxic to algae growth. The second research phase is a comprehensive modeling effort of the HTL process. A process model was developed in Aspen Plus from a robust assessment of current literature. These results are fed into TEA and LCA models to fully demonstrate the effects that process uncertainties have on the viability of HTL. For example, the high-temperature conditions that define HTL require the material to maintain a subcritical liquid state, which complicates the assessment of accurate thermochemical properties due to the required pressure. To clarify this issue, the work in this research phase compares the estimated performance of algal HTL between different thermodynamic models. HTL environmental metrics beyond global warming potential and net energy ratio are also discussed for the first time. Uncertainties in conversion performance are bounded through a scenario analysis that manipulates parameters such as product yield and nutrient recycle (as discussed in the first research phase) to establish a range of economic results and environmental impacts. The work is supplemented with a publicly available model to support future hydrothermal liquefaction assessments and accelerate the development of commercial-scale systems. The third and final research phase compares HTL with a fractionation train called Combined Algal Processing (CAP) and takes into consideration the possibility of integrating HTL downstream of CAP. CAP can be described as a pretreatment and fermentation step followed by a lipid extraction step to extract carbohydrates and lipids, respectively, for fuel products. However, CAP cannot convert proteins to fuels, making the process highly dependent on feed composition from the cultivation stage. 
HTL's advantage over CAP is its relative agnosticism to composition, but it requires greater capital costs and is more energetically intensive. A fuzzy logic approach is proposed to compare the CAP and HTL process models through relevant performance metrics and to map algal feed conditions that lead to optimal algae-to-fuel pathways. Thresholds are set for fuzzy membership functions in relevant performance objectives: minimum fuel selling price (MFSP), global warming potential (GWP), and net energy ratio (NER). The membership functions yield "satisfaction scores" for each objective and factor into an overall satisfaction score. Individual and overall satisfaction scores for each pathway are mapped to the full range of feed compositions (proteins, carbohydrates, and lipids). A composition-based algal growth model was then implemented to perform an uncertainty analysis through Monte Carlo simulations. The impact on satisfaction scores from varying other key process model parameters, such as algae productivity, individual process yields, process operating parameters, and life cycle inventory uncertainty, is highlighted in these select scenarios.

Item Open Access
An economic and environmental assessment of guayule resin co-products for a US natural rubber industry (Colorado State University. Libraries, 2023) Silagy, Brooke, author; Reardon, Kenneth, advisor; Quinn, Jason C., advisor; Kipper, Matthew, committee member; Bradley, Thomas, committee member

Guayule (Parthenium argentatum) is a natural rubber-producing desert shrub that has the potential to be grown in semi-arid areas with limited water resources. Numerous studies have examined the costs and environmental impacts associated with guayule rubber production. These studies identified the need for additional value from the rubber co-products, specifically the resin, for sustainable and commercial viability of the biorefinery concept. 
This study developed process models for resin-based essential oils, insect repellent, and adhesive co-products, integrated with sustainability assessments to understand commercial viability. A techno-economic analysis and cradle-to-gate life cycle assessment (LCA) of these three co-product pathways assumed a facility processing 66 tonnes/day of resin (derived from the processing of 1,428 tonnes per day of guayule biomass) and included resin separation through co-product formation. The evaluation outcomes were integrated into an established guayule rubber production model to assess the economic potential and environmental impact of the proposed guayule resin conversion concepts. The minimum selling price for rubber varied by co-product: $3.54 per kg for essential oil, $3.40 per kg for insect repellent, and $1.69 per kg for resin blend adhesive. The resin blend adhesive co-product pathway had the lowest greenhouse gas emissions. These findings show a pathway that supports the development of a biorefining concept based on resin-based adhesives that can catalyze a US-based natural rubber industry.

Item Open Access
Application of systems engineering principles in the analysis, modeling, and development of a DoD data processing system (Colorado State University. Libraries, 2023) Fenton, Kevin P., author; Simske, Steven J., advisor; Bradley, Thomas, committee member; Carlson, Ken, committee member; Atadero, Rebecca, committee member

In support of over 1,000 military installations worldwide, the Department of Defense (DoD) has procured contracts with thousands of vendors that supply the military with hazardous materials, constituting billions of dollars of defense expenses in support of facility and asset maintenance. These materials are used for a variety of purposes ranging from weapon system maintenance to industrial and facility operations. 
In order to comply with environmental, health, and safety (EHS) regulations, the vendors are contractually obligated to provide Safety Data Sheets (SDSs) listing EHS concerns, compliant with the requirements set forth by the United Nations Globally Harmonized System of Classification and Labeling of Chemicals (GHS). Each year, chemical vendors provide over 100,000 SDSs in PDF or hard-copy format. These SDSs are then entered manually by data stewards into the DoD centralized SDS repository, the Hazardous Materials Information Resource System (HMIRS). In addition, the majority of these SDSs are also loaded separately by other data stewards into downstream environmental compliance systems that support specific military branches. The association between the vendor-provided SDSs and the materials themselves is then lost until the material reaches an installation, at which point personnel must select the SDS associated with the hazardous material within the service-specific hazardous material tracking system. This research applied systems engineering principles in the analysis, modeling, and development of a DoD data processing system that could be used to increase efficiency, reduce costs, and provide an automated solution not only for reducing data entry but also for transitioning and modernizing hazard communication and data transfer toward a standardized approach. Research for the processing system covered a spectrum of modern analytics and data extraction techniques, including optical character recognition, artificial neural networks, and meta-algorithmic processes. Additionally, the research covered potential integration into the existing DoD framework and optimization to solve many long-standing chemical management problems. 
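The meta-algorithmic processes mentioned in this abstract combine multiple extraction techniques; the specific patent-pending approach is not described here. As a purely illustrative sketch (the extractor names, weights, and voting scheme below are assumptions, not the dissertation's method), several imperfect field extractors might be reconciled by a weighted vote:

```python
from collections import defaultdict

def combine_extractions(candidates, weights):
    """Weighted plurality vote over candidate values for one SDS field.

    candidates: {extractor_name: extracted_value} from independent
    extractors (e.g., OCR + regex, a neural tagger, a layout heuristic).
    weights: {extractor_name: reliability weight}, which could be
    estimated on a validation set of hand-entered SDS records.
    Returns the candidate value with the highest total weight.
    """
    scores = defaultdict(float)
    for name, value in candidates.items():
        if value is not None:
            scores[value] += weights.get(name, 0.0)
    return max(scores, key=scores.get) if scores else None

# Hypothetical example: three extractors disagree on a CAS number.
votes = {"ocr_regex": "67-64-1", "neural": "67-64-1", "layout": "67-64-7"}
w = {"ocr_regex": 0.6, "neural": 0.9, "layout": 0.7}
print(combine_extractions(votes, w))  # "67-64-1" (weight 1.5 vs 0.7)
```

A combiner of this shape can outperform any single extractor alone when the extractors' errors are not perfectly correlated, which is the general motivation for meta-algorithmic pipelines.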
While the long-term focus was for chemical manufacturers to provide SDS data in a standardized machine-encoded format, this system is designed to act as a transitionary tool to reduce manual data entry and costs of over $3 million each year while also enhancing system features to address other major obstacles in the hazard communication process. Complexities involved with the data processing of SDSs included multilingual translation needs, image and text recognition, periodic use of tables, and, while SDSs are structured with 16 distinct sections, a general lack of standardization in how those sections are formatted. These complexities have been addressed using a patent-pending meta-algorithmic approach to produce higher data extraction yields than an artificial neural network can produce alone while also providing SDS-specific data validation and calculation of SDS-derived data points. As the research progressed, this system functionality was communicated throughout the DoD and became part of a larger conceptual digital hazard communication transformation effort currently underway by the Office of the Secretary of Defense and the Defense Logistics Agency. This research led to five publications, a pending patent, a $280,000 award for prototype development, and a project for the development of this system to be used as one of the potential systems in a larger DoD effort for full chemical disclosure and proactive management of not only hazardous chemicals but potentially all DoD-procured products.

Item Open Access
Applying model-based systems engineering in search of quality by design (Colorado State University. 
Libraries, 2022) Miller, Andrew R., author; Herber, Daniel R., advisor; Bradley, Thomas, committee member; Miller, Erika, committee member; Simske, Steve, committee member; Yalin, Azer P., committee member

Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design process of many different types of systems. The application of these techniques can be reflected in the modeling of requirements, functions, behavior, and many other aspects. The modeled design provides a digital representation of a system, along with the supporting development data architecture and the functional requirements associated with that architecture. Various levels of system and data architecture fidelity can be represented within MBSE environment tools. Typically, the level of fidelity is driven by crucial systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that provide a representative system meeting cost and schedule targets with sufficient quality while maintaining the customer's performance needs. The most complex and elusive of these constraints is quality: given a certain set of system-level requirements, the likelihood that those requirements will be correctly and accurately realized in the final system design. This research investigates the Department of Defense Architecture Framework (DoDAF) in use today to establish and then assess the relationship between the system, data architecture, and requirements in terms of Quality by Design (QbD). The term QbD was coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1]. 
This research investigates and proposes a means to contextualize high-level quality terms within the MBSE functional area, provide an outline for a conceptual but functional quality framework as it pertains to the MBSE DoDAF, provide tailored quality metrics with improved definitions, and then test this improved quality framework by assessing two corresponding case studies within the MBSE functional area to interrogate model architectures and assess the quality of system design. Developed in the early 2000s, the DoDAF is still in use today, and its system description methodologies continue to impact subsequent system description approaches [2]. Two case studies were analyzed to show how the proposed QbD evaluation can analyze DoDAF CONOP architecture quality. The first case study addresses the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second case study addresses the DoDAF CONOP of the Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the Command and Control signaling mission thread. The case studies help to demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of DoDAF architecture in depth, including the application of the DoDAF standard, the UML/SysML standards, and requirement architecture instantiation, as well as modularity to understand architecture reusability and complexity. By providing a renewed focus on a quality-based systems engineering process when applying the DoDAF, improved trust in the system and data architecture of the completed models can be achieved. 
The results of the case study analyses reveal how a quality-focused systems engineering process can be used during development to provide a product design that better meets the customer's intent and ultimately provides the potential for the best-quality product.

Item Open Access
Artificial neural networks for fuel consumption and emissions modeling in light-duty vehicles (Colorado State University. Libraries, 2019) Chenna, Shiva Tarun, author; Jathar, Shantanu, advisor; Bradley, Thomas, committee member; Anderson, Chuck, committee member

There is growing evidence that real-world, on-road emissions from mobile sources exceed emissions determined during laboratory tests and that the air quality, climate, and human health impacts from mobile sources might be substantially different than initially thought. Hence, there is an immediate need to measure and model these exceedances if we are to better understand and mitigate the environmental impacts of mobile sources. In this work, we used a portable emissions monitoring system (PEMS) and artificial neural networks (ANNs) to measure and model on-road fuel consumption and tailpipe emissions from Tier-2 light-duty gasoline and diesel vehicles. Tests were performed on at least five separate days for each vehicle, and each test included a cold start and operation over a hot phase. Routes were deliberately picked to mimic certain features (e.g., distance, time duration) of driving cycles used for emissions certification (e.g., FTP-75). Data were gathered for a total of 49 miles and 145 minutes for the gasoline vehicle and 52 miles and 165 minutes for the diesel vehicle. Fuel consumption and emissions data were calculated at 1 Hz using information gathered from the vehicle's onboard diagnostics port and the PEMS measurements. Route-integrated tailpipe emissions did not exceed the Tier-2 emissions standard for CO, NOX, and non-methane organic gases (NMOG) for either vehicle but did exceed the standard for PM for the diesel vehicle. 
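The 1 Hz ANN regression used in this work can be illustrated with a generic sketch. Everything below is synthetic and assumed (made-up features and a made-up fuel-rate surrogate); the only detail borrowed from the abstract is a single hidden layer sized in the 7-9 neuron range the study found to work best:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 1 Hz OBD-derived features: speed (m/s),
# acceleration (m/s^2), and engine speed (krpm); the target is a
# made-up fuel-rate surrogate (g/s), not the study's data.
X = rng.uniform([0.0, -3.0, 0.6], [35.0, 3.0, 4.0], size=(2000, 3))
y = 0.05 * X[:, 0] + 0.4 * np.maximum(X[:, 1], 0.0) + 0.3 * X[:, 2]
y = (y + rng.normal(0.0, 0.05, size=2000)).reshape(-1, 1)

# Standardize inputs so full-batch gradient descent is well behaved.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# One hidden layer with 8 neurons, trained by plain backpropagation
# on the mean squared error.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output for regression
    err = pred - y
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean(err ** 2))
print(f"training MSE after 2000 epochs: {mse:.4f}")
```

In practice a held-out route would be used for evaluation, mirroring the abstract's train/test split over separate driving days.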
We trained ANN models on part of the data to predict fuel consumption and tailpipe emissions at 1 Hz for both vehicles and evaluated these models against the rest of the data. The ANN models performed best when the number of training iterations (or epochs) was larger than 25 and the number of neurons in the hidden layer was between 7 and 9, although we did not see any specific advantage in increasing the number of hidden layers beyond 1. The trained ANN model predicted the fuel consumption over test routes within 5.5% of the measured value for both the gasoline and diesel vehicles. The ANN performance varied significantly with pollutant type for the two vehicles, and we were able to develop satisfactory models only for unburned hydrocarbons (HC) for the gasoline vehicle and NOX for the diesel vehicle. Over independent test routes, the trained ANN models predicted HC within 12.5% of the measured value for the gasoline vehicle and NOX emissions within 3% of the measured values for the diesel vehicle. The ANN performed better than, and hence could be used in lieu of, multivariable regression models such as those used in mobile source emissions models (e.g., EMFAC). In an 'environmental-routing' case study performed over three origin-destination pairs, the ANNs were able to successfully pick routes that minimized fuel consumption. Our work demonstrates the use of artificial neural networks to model fuel consumption and tailpipe emissions from light-duty passenger vehicles, with applications ranging from environmental routing to emissions inventory modeling.

Item Open Access
Autonomous UAV control and testing methods utilizing partially observable Markov decision processes (Colorado State University. Libraries, 2018) Eaton, Christopher M., author; Chong, Edwin K. 
P., advisor; Maciejewski, Anthony A., advisor; Bradley, Thomas, committee member; Young, Peter, committee member

The explosion of Unmanned Aerial Vehicles (UAVs) and the rapid development of algorithms to support their autonomous flight operations have resulted in a diverse and complex set of requirements and capabilities. This dissertation provides an approach to manage these autonomous UAVs effectively, to command the vehicles efficiently through their missions, and to verify and validate that the system meets requirements. A high-level system architecture is proposed for implementation on any UAV. A Partially Observable Markov Decision Process algorithm for tracking moving targets is developed for fixed field-of-view sensors while providing an approach for more fuel-efficient operations. Finally, an approach for testing autonomous algorithms and systems is proposed to enable efficient and effective test and evaluation in support of verification and validation of autonomous system requirements.

Item Open Access
Characterization and treatment of produced water from Wattenberg oil and gas wells fractured with slickwater and gel fluids (Colorado State University. Libraries, 2014) Sick, Bradley A., author; Carlson, Kenneth, advisor; Omur-Ozbek, Pinar, committee member; Bradley, Thomas, committee member

Treatment of produced water for reuse as a fracturing fluid is becoming an increasingly important aspect of water management surrounding the booming unconventional oil and gas industry. Understanding variation in water quality due to fracturing fluid type and produced water age is fundamental to choosing an effective treatment strategy. This study involves the collection and analysis of produced water samples from three wells in the Wattenberg Field, located in northeast Colorado, over a 63-day study period (15 sampling events). One well was fractured with a cross-linked gel fluid, one with a slickwater fluid, and one with a hybrid of both fluids. 
Extensive water quality characterization was conducted on each sample to understand the impact of fracturing fluid type on temporal water quality trends. The greatest impact observed was that total organic carbon (TOC) concentrations were significantly higher in produced water samples from the wells fractured with the gel and hybrid fluids (943 to 1,735 mg/L) compared to the well fractured with the slickwater fluid (222 to 440 mg/L). Total dissolved solids (TDS) concentrations, as well as many of the component inorganics that make up TDS, were fairly consistent among the three wells. TDS concentrations at each well increased with time, from roughly 18,000 mg/L at day 1 to roughly 30,000 mg/L at day 63. Jar testing was conducted on collected samples to understand the variability in chemical coagulation/flocculation treatment due to the type of fracturing fluid and well age. For the sampled wells, it was found that chemical coagulation can successfully reduce the turbidity of produced waters from wells fractured with both slickwater and gel fluids immediately after the start of production. The coagulant demand for produced waters from wells fractured with gel fluids was found to be roughly 25 to 300% higher than that for wells fractured with slickwater fluids. The coagulant demand of produced water from each well was found to decrease with the age of the well. Additional laboratory characterization techniques were conducted on a subset of samples in order to better understand the makeup of organic compounds in produced water, including an analysis of the distribution of the volatile portion of solids, a TOC size analysis, and an analysis of organic subcategories. It was found that the majority of organic compounds in produced water samples are smaller than 0.2 µm, and that the relatively small portion larger than 1.5 µm contributes significantly to the predominantly volatile total suspended solids (TSS) load. 
Carbohydrates were found to be the largest contributor to the overall organic compound load in early produced waters from wells fractured with gel fluids; petroleum hydrocarbons were found to be the largest contributor from wells fractured with slickwater fluids. Chemical coagulation was found to reduce TOC concentrations by roughly 20%, independent of this difference in makeup.

Item Open Access
Comparison of design and implementation of hybrid systems in prototype vehicles (Colorado State University. Libraries, 2021) Mckenney, Benjamin, author; Quinn, Jason, advisor; Bradley, Thomas, committee member; Windom, Brett, committee member

With continually increasing concern over vehicle emissions, the automotive industry is focused on the advancement of new technology to reduce fuel consumption and curb emissions. The Colorado State University (CSU) Vehicle Innovation Team (VIT) has recently constructed two separate vehicle prototypes that utilize state-of-the-art automotive technology for the purpose of furthering automotive research, specifically in the area of new controls techniques. The focus of these two projects has been the integration of hybrid powertrains into traditional combustion-engine-driven vehicles. The vision, scope, and overall goals of each research project vary drastically, and thus the design choices vary as well. This paper focuses on the two separate hybrid vehicle projects and seeks to capture the design and integration decisions that were made and to provide insight and reasoning as to why those choices were made. The process begins with the background and scope of each project, which lays the groundwork for the design requirements that drive each vehicle's overall architecture, design, function, and performance. Once these design requirements are understood, the component selection process is then examined for each vehicle. 
Fabrication and integration of the hybrid powertrain within each vehicle are also explored in a similar manner, in which the techniques and methodologies give insight into the prototyping process. Throughout these sections, the two vehicle projects are compared to one another, and the differences are discussed in detail as they pertain to the design requirements of each project. Finally, the testing procedures as well as results from the hybrid systems are presented.

Item Open Access
Control of an 8L45 transmission inside the Colorado State University EcoCAR 3 2016 Chevrolet Camaro (Colorado State University. Libraries, 2021) Knackstedt, Clinton, author; Quinn, Jason, advisor; Bradley, Thomas, committee member; Marchese, Anthony, committee member

The hybridization and electrification of vehicles bring new challenges to the engineering and development of automotive control systems. Parallel, single-motor, pre-transmission hybrid electric vehicles are a preferred design for hybrid vehicles because of their mechanical simplicity: the electric motor and engine are on a common axis, connected to the transmission. Mechanically, this configuration enables the electric motor to take advantage of the torque multiplication of the final drive gear and transmission. From a controls perspective, this configuration is complicated because the engine, motor, and transmission must work together to achieve the system-level objectives of fuel economy and driveability. These challenges are exemplified in the hybrid 2016 Chevrolet Camaro developed by the Colorado State University (CSU) EcoCAR 3 team. The results of this thesis demonstrate model development, model validation, and controls development to coordinate the operation of the electric motor and engine for driveability and performance during transmission gear changes. 
A model was developed in MATLAB Simulink to predict the behavior and performance of the stock 8L45 8-speed automatic transmission in the 2016 Chevrolet Camaro. The performance of this model was validated by comparison to on-track vehicle data, with <0.3 m/s average error in prediction of the vehicle speed trace. A control system was developed to enable control of electric motor torque during shifts, which eliminates ignition timing-based torque requests while maintaining driveability-derived shift dynamics. This work has implications for the design of automatic-transmission hybrid electric vehicles, with discussion focusing on the potential for integration of learning technologies and minimization of gear lash.

Item Open Access
Economic and environmental evaluation of emerging electric vehicle technologies (Colorado State University. Libraries, 2023) Horesh, Noah, author; Quinn, Jason, advisor; Bradley, Thomas, committee member; Jathar, Shantanu, committee member; Willson, Bryan, committee member

As the transportation sector seeks to reduce costs and greenhouse gas (GHG) emissions, electric vehicles (EVs) have emerged as a promising solution. The continuous growth of the EV market necessitates the development of technologies that facilitate an economically comparable transition away from internal combustion engine vehicles (ICEVs). Moreover, it is essential to incorporate sustainability considerations across the entire value chain of EVs to ensure a sustainable future. The sustainability of EVs extends beyond their usage and includes factors such as battery production, charging infrastructure, and end-of-life management. Techno-economic analysis (TEA) and life cycle assessment (LCA) are key methodologies used to evaluate the economic and environmental components of sustainability, respectively. This dissertation uses technological performance modeling combined with TEA and LCA methods to identify optimal deployment strategies for EV technologies. 
A major challenge with the electrification of transportation is the end of life of battery systems. A TEA is utilized to assess the economic viability of a novel Heterogeneous Unifying Battery (HUB) reconditioning system, which improves the performance of retired EV batteries before their 2nd-life integration into grid energy storage systems (ESS). The modeling work incorporates the costs involved in the reconditioning process to determine the resale price of the batteries. Furthermore, the economic analysis is expanded to evaluate the use of HUB-reconditioned batteries in a grid ESS, comparing it with an ESS assembled with new lithium-ion (Li-ion) batteries. The minimum required revenue from each ESS is determined and compared with the estimated revenue of various grid applications to assess the market size. The findings reveal that the economically viable market capacity of these applications can fully absorb the current supply of 2nd-life EV batteries from early adopters in the United States (U.S.). However, as EV adoption expands beyond early adopters, the ESS market capacity may become saturated with the increased availability of 2nd-life batteries. Despite the growing interest in EVs, their widespread adoption has been hindered, in part, by the lack of access to nearby charging infrastructure. This issue is particularly prevalent in Multi-Unit Dwellings (MUDs), where the installation of chargers can be unaffordable or unattainable for residents. To address this, TEA methodology is used to evaluate the levelized cost of charging (LCOC) for Battery Electric Vehicles (BEVs) at MUD charging hubs, aiming to identify economically viable charger deployment pathways. Specifically, multiple combinations of plug-in charger types and hub ownership models are investigated. Furthermore, the total cost of ownership (TCO) is assessed, encompassing vehicle depreciation, maintenance and repair, insurance, license and registration, and the LCOC. 
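The LCOC and TCO framing in this abstract can be illustrated with a generic sketch. All dollar figures, utilization numbers, and the per-mile split below are placeholders, not the study's values; the capital-recovery-factor form of levelization is a standard TEA convention assumed here:

```python
def levelized_cost_of_charging(capital, annual_om, annual_kwh,
                               lifetime_yr, discount_rate,
                               electricity_price):
    """Levelized cost of charging ($/kWh): annualized lifetime costs
    divided by annual energy delivered."""
    # Capital recovery factor converts an upfront cost into an
    # equivalent annual payment at the given discount rate.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))
    annualized = capital * crf + annual_om + annual_kwh * electricity_price
    return annualized / annual_kwh

def total_cost_of_ownership(components):
    """TCO as the sum of per-mile cost components ($/mile)."""
    return sum(components.values())

# Placeholder numbers for a shared Level 2 hub charger at a MUD.
lcoc = levelized_cost_of_charging(capital=6000, annual_om=400,
                                  annual_kwh=10000, lifetime_yr=10,
                                  discount_rate=0.07,
                                  electricity_price=0.12)
tco = total_cost_of_ownership({
    "depreciation": 0.17, "maintenance_repair": 0.06, "insurance": 0.09,
    "license_registration": 0.02,
    "charging": lcoc * 0.30,  # assumes ~0.30 kWh/mile consumption
})
print(f"LCOC: ${lcoc:.3f}/kWh, TCO: ${tco:.3f}/mile")
```

Because capital is spread over delivered energy, a lightly utilized MUD charger yields a higher LCOC than a heavily utilized one, which is one driver of the MUD-versus-single-family-home cost gap the abstract describes.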
The study also conducts a cradle-to-grave (C2G) LCA comparing an average passenger BEV and a gasoline conventional vehicle (CV) using geographical and temporal resolution for BEV charging. The TCO is coupled with the C2G GHG emissions to calculate the cost of GHG emissions reduction. The analysis demonstrates that MUD BEVs can reduce both costs and GHG emissions without subsidies, resulting in negative costs of GHG emissions reduction for most scenarios. However, charging at MUDs is shown to be more expensive than at single-family homes, potentially leading to financial inequities. Additional research is required to assess the advantages of public charging systems and commercial EVs. While home charging is typically the primary option for EVs, public charging infrastructure is necessary for long-distance travel and urgent charging. This is especially important for commercial vehicles, which rely on public charging to support their operational requirements. Various charging systems have been proposed, including Direct Current Fast Charging (DCFC), battery swapping stations (BSS), and Dynamic Wireless Power Transfer (DWPT). This work includes a comparison of the TCO and global warming potential (GWP) of EVs of various sizes, specifically examining the charging systems utilized, to determine precise location-specific sustainability outcomes. Nationwide infrastructure deployment simulations are conducted based on the forecasted geospatial and temporal demand for EV charging from 2031 to 2050. The TEA and LCA incorporate local fuel prices, electricity prices, electricity mixes, and traffic volumes. To account for the variability of parameters that highly influence TCO and GWP, optimistic, baseline, and conservative scenarios are modeled for EV adoption, electricity mixes, capital costs, electricity prices, and fuel prices. 
The change in TCO from switching from internal combustion engine vehicles (ICEVs) to EVs is shown to vary across scenarios, vehicle categories, and locations, with local parameters dramatically impacting results. Further, the EV GWP depends on local electricity mixes and infrastructure utilization. This research highlights the dynamic nature of EV benefits and the potential for optimal outcomes through the deployment of multiple charging technologies. In conclusion, this research underscores the significance of strategically deploying EV charging infrastructure and utilizing retired EV batteries for grid energy storage. Instead of posing a challenge at end of life, these batteries are shown to be a solution for grid energy storage. The study also highlights the economic advantages of different charging infrastructure types for EVs and their role in driving EV adoption, resulting in potential GHG emissions reductions and consumer savings. Ultimately, widespread EV adoption and decarbonization of electrical grids are pivotal in achieving climate goals.Item Open Access Economic viability of multiple algal biorefining pathways and the impact of public policies(Colorado State University. Libraries, 2018) Cruce, Jesse R., author; Quinn, Jason C., advisor; Bradley, Thomas, committee member; Burkhardt, Jesse, committee memberThis study makes a holistic comparison between multiple algal biofuel pathways and examines the impact of co-products and methods assumptions on the economic viability of algal systems. Engineering process models for multiple production pathways were evaluated using techno-economic analysis (TEA). These pathways included baseline hydrothermal liquefaction (HTL), protein extraction with HTL, fractionation into high-value chemicals and fuels, and a small-scale first-of-a-kind plant coupled with a wastewater treatment facility. The impact of policy scenarios on economic results was then examined.
The type of depreciation scheme was shown to be irrelevant for durations less than 9 years, while short-term subsidies were found to capture 50% of the subsidy value in 6 years, and 75% in 12 years. Carbon prices can decrease fuel costs as seen by the production facility through carbon capture credits. TEA tradeoff assessments determined that $7.30 of capital cost is equivalent to $1 per year of operational cost for baseline economic assumptions. Comparison of algal fuels to corn and cellulosic ethanol demonstrates the need for significant co-product credits to offset high algal capital costs. Higher-value co-products were shown to be required for algal fuel economic viability.Item Open Access Empirical evaluation of a dimension-reduction method for time-series prediction(Colorado State University. Libraries, 2020) Ghorbani, Mahsa, author; Chong, Edwin K. P., advisor; Pezeshki, Ali, committee member; Young, Peter, committee member; Bradley, Thomas, committee memberStock price prediction is one of the most challenging problems in finance. The multivariate conditional mean is a point estimator that minimizes the mean square error of prediction given past data. However, the calculation of the conditional mean and covariance involves the numerical inverse of a typically ill-conditioned matrix, leading to numerical issues. To overcome this problem, we develop a method based on filtering the data using principal components. Principal component analysis (PCA) identifies a small number of principal components that explain most of the variation in a data set. This method is often used for dimensionality reduction and analysis of the data. Our method bears some similarities with subspace filtering methods. Projecting the noisy observation onto a principal subspace leads to significantly better numerical conditioning. Our method accounts for time-varying covariance information.
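The principal-subspace projection described above can be sketched with a standard SVD-based filter; this is a generic illustration of subspace filtering, not the authors' exact algorithm (which additionally incorporates time-varying covariance information):

```python
import numpy as np

def pca_subspace_filter(X, k):
    """Project the rows of data matrix X (samples x variables) onto the
    subspace spanned by the k leading principal components. Working in a
    low-rank subspace avoids inverting an ill-conditioned covariance
    matrix."""
    Xc = X - X.mean(axis=0)                    # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Pk = Vt[:k].T                              # k leading principal directions
    return X.mean(axis=0) + Xc @ Pk @ Pk.T     # filtered observations
```

With `k` equal to the number of variables the filter returns the data unchanged; smaller `k` discards the low-variance directions that cause the numerical issues.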
We first introduce our method for predicting future price values over a short period of time using just historical price values. The literature provides strong evidence that stock price values can be predicted from past price data. Different economic variables have also been used in the literature to estimate stock-price values with high accuracy. To accommodate using historical data for such economic variables, we build on our method to include multiple predictors. We use the multichannel cross-correlation coefficient as a measure for selecting the most correlated set of variables for each stock. Then we apply our filtering operation based on the local covariance of the data. Our method is easily implemented and can be configured to include an arbitrary number of predictors, subject to computational constraints. Time-series prediction can be posed as a matrix completion problem. Matrix completion is an important problem in many fields and has been receiving considerable attention in recent years. Different approaches and algorithms have been proposed to solve this problem. We investigate the effectiveness of an iterative rank-minimizing matrix completion algorithm for predicting financial time series. As a key performance metric for comparing the different schemes, we use computational complexity, which captures their computational burden. We compare the prediction results from the iterative matrix completion method to our method in terms of asymptotic and empirical computational complexity. Both methods show similar performance for forecasting future stock price values in terms of different performance metrics, but our proposed method has lower computational complexity.Item Open Access Evaluating factors that impact situation awareness and takeover responses during cyberattacks on connected and automated vehicles(Colorado State University.
Libraries, 2022) Aliebrahimi, Somayeh, author; Miller, Erika, advisor; Bradley, Thomas, committee member; Batchelor, Ann, committee member; Clegg, Benjamin, committee memberAutonomous vehicles offer many potential benefits; however, this expansion of cyber-physical systems into transportation also introduces a new potential vulnerability in terms of cybersecurity threats. It is therefore important to understand the role vehicle occupants can play in preventing and responding to cyberattacks. The objectives of this study are to (1) evaluate how drivers respond to unexpected cyberattacks on automated vehicles, (2) evaluate how cybersecurity knowledge affects situation awareness (SA) during cyberattacks on automated driving, and (3) evaluate how the type of cyberattack affects a driver's response. A driving simulator study with 20 participants was conducted to measure drivers' performance during unexpected cyberattacks on an SAE Level 2 partially automated vehicle and on the infrastructure in the driving environment. The scenarios were developed specifically for use in this study. Each participant experienced four driving scenarios, each with a different cyberattack. Two cyberattacks were directly on the vehicle and two were on the infrastructure. The Situation Awareness Global Assessment Technique (SAGAT) was used to measure participants' situation awareness during the drives and at the time of the cyberattacks. Participant takeover responses to the cyberattacks were collected through the driving simulator. Participants also completed a cybersecurity knowledge survey at the end of the experiment to assess their overall cyber awareness and prior experience with autonomous vehicles. Most of the participants noticed the cyberattacks; however, only about half of the participants chose to take over control of the vehicle during the attacks, and in one attack no participant took over from the automation.
Results from ANOVAs showed significantly higher SA for participants with greater familiarity with cybersecurity terms and vehicle-to-everything technology. In addition, SA scores were significantly higher for participants who believed security systems (i.e., firewall, encryption) are important and for those who felt protected against cybercrimes. The present results suggest that increased cybersecurity knowledge can support a higher level of situation awareness during automated driving, which can help drivers manage unexpected driving situations caused by cybersecurity attacks. Additionally, the results show that drivers are more likely to take over control of their automated vehicle during cyberattacks that have known adverse outcomes, such as failing to stop at a stop sign or traffic signal or having their vision obscured.Item Open Access Exploration of unique porous bone materials for candidacy in bioinspired material design(Colorado State University. Libraries, 2018) Seek, Timothy W., author; Donahue, Seth, advisor; Bradley, Thomas, committee member; Florant, Gregory, committee memberBioinspired material design draws inspiration for improved technologies from unique functional adaptations found in nature. Grizzly bear (Ursus arctos horribilis), cave bear (Ursus spelaeus), edmontosaur (Edmontosaurus annectens, Edmontosaurus regalis), and bighorn sheep (Ovis canadensis) exhibit unique functional examples of porous bone structures. Grizzly bear trabecular bone does not lose bone density during long periods of disuse. Cave bears, being larger than grizzly bears, give a unique perspective on trabecular bone property scaling relationships in animals from the near past. Edmontosaurs are estimated to have grown to gigantic sizes weighing 7936±1991 kg, creating a unique high-force loading environment in dinosaur trabecular bone.
Bighorn sheep butt heads during the mating season, routinely generating accelerations near 100 g and forces of approximately 3400 N in their horn core bone during impact. Morphological trabecular bone properties of bone volume fraction (BV/TV), trabecular thickness (Tb.Th), trabecular separation (Tb.Sp), and trabecular number (Tb.N) were examined using micro-computed tomography (µCT) imaging for the underlying trabecular bone in the proximal tibias of grizzly bears, cave bears, and edmontosaurs. Morphological bone properties were compared against body mass scaling relationships from extant mammals. Cave bear trabecular bone was found to have larger BV/TV and Tb.Th than modern grizzly bears. The larger BV/TV may indicate environmental drivers on cave bear trabecular bone properties. To our knowledge, the measurement of dinosaur trabecular bone properties is a novel contribution. Adult edmontosaur BV/TV averaged greater than 60%, which was significantly different from extant species' BV/TV values. Additionally, adult edmontosaur Tb.Th and Tb.Sp were measured at values comparable to small mammals. The difference in edmontosaur BV/TV from extant mammals may be a potential clue as to why extant terrestrial animals do not reach the same levels of gigantism as dinosaurs. Additionally, mimicking the continuum properties of edmontosaur trabecular bone in an engineered foam may have potential use in optimized high-strength foams. Bighorn sheep horn core bone exhibits observational and morphological properties different from typical trabecular bone in thickness, separation, and number. Because of these differences, the bighorn sheep horn core bone is being considered a new type of porous bone architecture referred to as 'velar' bone. The velar bone morphology indicates that it is highly adapted to resist high-impact bending through widely separated and thick bone formations.
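Body-mass scaling comparisons of the kind described above are commonly made by fitting a power law on log-log axes. A sketch with hypothetical mammal data (the values below are illustrative, not measurements from this work):

```python
import numpy as np

# Hypothetical (body mass kg, trabecular thickness mm) pairs for extant mammals
mass = np.array([5.0, 50.0, 250.0, 700.0])
tb_th = np.array([0.12, 0.21, 0.33, 0.42])

# Power law Tb.Th = a * M^b becomes a straight line on log-log axes
b, log_a = np.polyfit(np.log10(mass), np.log10(tb_th), 1)

def predict(m):
    """Scaling curve for comparison with fossil measurements."""
    return 10 ** log_a * m ** b
```

Fossil values falling far off the fitted curve (as reported for edmontosaur BV/TV) are what flag a departure from the extant-mammal scaling relationship.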
Future bioinspired engineering foam designs mimicking the structures of porous bone outlined in this research could be useful for energy absorption under repeated high-impact loading. The work presented here does not include efforts to create a bioinspired structural foam. Rather, this research focuses on the quantification of porous bone structural properties optimized for unique mechanical environments, for the purpose of guiding future research toward structural foam design.Item Open Access GIS based location optimization for mobile produced water treatment facilities in shale gas operations(Colorado State University. Libraries, 2014) Kitwadkar, Amol Hanmant, author; Carlson, Kenneth H., advisor; Catton, Kimberly, advisor; Bradley, Thomas, committee memberOver 60% of the nation's total energy is supplied by oil and natural gas together, and this demand for energy will continue to grow in the future (Radler et al. 2012). The growing demand is pushing the exploration and exploitation of onshore oil and natural gas reservoirs. Hydraulic fracturing has proven not only to create jobs and economic growth but also to place substantial stress on natural resources such as water. Because water is one of the most important factors in hydraulic fracturing, proper fluids management during the development of a field of operation is perhaps the key element in addressing these issues. Almost 30% of the water used during hydraulic fracturing comes out of the well in the form of flowback water during the first month after the well is fractured (Bai et al. 2012). Handling the large volume of water coming out of newly fractured wells is one of the major issues; after this period the volume drops off and remains constant for a long time (Bai et al. 2012), so permanent facilities can be constructed to take care of the water over a longer period.
This paper presents the development of a GIS-based tool for optimizing the location of a mobile produced water treatment facility while field development is still occurring. A methodology was developed based on a multi criteria decision analysis (MCDA) to optimize the location of the mobile treatment facilities. The criteria for the MCDA include well density, ease of access from roads (considering truck hauls), piping distance minimization (if piping is used), and produced water volume. The area of study is 72 square miles east of Greeley, CO in the Wattenberg Field in northeastern Colorado that will be developed for oil and gas production starting in the year 2014. A quarterly analysis is performed to observe how future development plans and current circumstances affect the optimal location from quarter to quarter. This helps operators make long-term decisions, including decisions about well pad siting and well densities. Three different scenarios--baseline, retroactive and proactive--were considered to determine the most appropriate approach to optimal fluids management (OFM). Once the locations were obtained, the scenarios were compared on the piping distance from each well to the facility, taking pipeline distance as the criterion to be minimized. The resulting locations were robust and fulfilled the intended purpose.
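A weighted-sum score is one common way to implement an MCDA like the one described above; the criteria weights and site scores below are hypothetical, for illustration only:

```python
# Candidate facility sites scored on normalized criteria in [0, 1]
# (weights and values are hypothetical, for illustration only)
criteria_weights = {"well_density": 0.4, "road_access": 0.3, "water_volume": 0.3}

sites = {
    "A": {"well_density": 0.8, "road_access": 0.6, "water_volume": 0.9},
    "B": {"well_density": 0.5, "road_access": 0.9, "water_volume": 0.4},
}

def mcda_score(site):
    """Weighted sum of normalized criteria for one candidate site."""
    return sum(criteria_weights[c] * v for c, v in site.items())

best = max(sites, key=lambda s: mcda_score(sites[s]))
```

Re-running the scoring each quarter with updated well counts and volumes gives the quarter-to-quarter location tracking the study describes.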
Thus, for the foreseeable future, the Latin American market will experience high demand for construction equipment such as backhoes, excavators, crawler-dozers, and loaders to construct roads, housing, airports, and sea ports. Construction equipment employed in Latin America operates in conditions that are often more severe than those in developed countries such as the United States. Consequently, the durability of construction equipment diesel engines is reduced within the context of the system engineering life cycle. This results in a greater number of warranty claims, increased customer product dissatisfaction, and delays in completing contracted projects. Peer-reviewed literature lacks information regarding the wear and failure of construction equipment diesel engines operating in Latin America. Thus, the purpose of this research is to contribute to the system and maintainability engineering fields of knowledge by analyzing oil samples taken from diesel engines operating in Latin America. Oil samples are leading indicators and predictors of wear in specific components of diesel engines, as they directly connect to the use conditions of actual work environments. The methodology considers data points from different sources and countries. The engine oil sample analysis results are evaluated in the context of local diesel fuel quality, machine diagnostic trouble codes, and the work environments for the following countries: Bolivia, Colombia, Costa Rica, Dominican Republic, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Uruguay. The following data sources are used to answer the research questions: (1) a database of oil sample laboratories in eleven countries, (2) construction equipment diagnostic trouble codes, (3) construction equipment surveys, (4) John Deere service managers' surveys, (5) two John Deere 200D excavators, (6) engine operating data, and (7) Engine Control Unit sensor data.
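Screening oil-contamination readings against environmental variables, as this methodology does, can be sketched with a Pearson correlation coefficient; the altitude and silicon readings below are hypothetical, for illustration only:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical operating altitude (m) vs. silicon contamination (ppm)
altitude = [200.0, 1500.0, 2600.0, 3800.0]
silicon_ppm = [4.0, 9.0, 15.0, 22.0]
r = pearson_r(altitude, silicon_ppm)
```

A value of `r` near 1 would indicate the kind of strong altitude-contamination association the study reports for sodium, silicon, and aluminum.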
Cross-system contamination is determined to be a key contributor to oil contamination. Contamination related to the environmental conditions in which the equipment was operated is also a key factor, as there is a high statistical correlation of sodium, silicon, and aluminum oil contamination present in the oil of equipment operating at higher altitudes. It is determined that sulfur, diesel fuel quality, humidity, bio-diesel, temperature, and altitude are factors that must be considered in relation to diesel engine reliability and maintenance. By correlating engine oil sample contamination with the environmental risk drivers, the research found that (a) altitude and diesel fuel quality have the greatest impact on iron readings, (b) bio-diesel impacts copper, and (c) precipitation and poor diesel quality are associated with silicon levels. Wear metals present in the oil samples indicate that scheduled maintenance intervals must not exceed 250 hours for diesel engines operating in many areas of Latin America. The leading and earliest indicator of engine wear is a high level of iron particles in the engine oil, reaching abnormal levels at 218 hours. The research found that engine idling for extended periods contributes to soot accumulation.
While the literature proposes philosophical approaches to integrating these disciplines, no practical approach appears to be offered. The current study proposes a practical approach by way of a management flight simulator that integrates systems engineering and management models for data-driven, risk-informed decision-making. This simulator provides immediate feedback on whether a change is going to help or disrupt design integrity through the monitoring of system attribute trends and cues. It also provides the impact on lifecycle management curves using a system dynamics sub-model. From this feedback, several system, policy, and process levers are available within the simulator for what-if scenarios, with the goal of improving product, organizational, and project performance. The value in the emergent properties of the simulator as a decision support system is viewed as greater than the sum of its sub-models. In developing the simulator, integration requirements, systems thinking, systems science, and systems engineering practices are leveraged to develop an integration strategy. For bringing multiple disciplines together to address design change risks, a response strategy is proposed that includes aspects of set-based, goal-based design and agile management practices.
This dissertation follows a three-paper format and examines these systems through a comprehensive analysis, using systems approaches, latent transition analysis (LTA), and ordinal regression to uncover patterns and inform improvements in public health governance and service delivery. The first essay (Chapter 2) explores the application of systems approaches to the design and improvement of public health systems. A scoping review was conducted, revealing a paucity of literature on the use of "hard" systems methodologies like systems analysis and engineering in public health. The findings highlight the potential for systems approaches to enhance the efficiency, effectiveness, and equity of public health services. However, the limited engagement by public health practitioners and the lack of depth in existing literature indicate significant gaps that need to be addressed to fully leverage systems science in public health governance and service delivery. Building on the literature review, the second essay (Chapter 3) introduces a novel typology of local health departments (LHDs) using LTA based on the National Association of County and City Health Officials (NACCHO) Profile study data. The LTA identified six distinct latent statuses of LHDs, characterized by variables such as governance centrality, colocation, and integration. This typology provides a robust framework for understanding the structural and operational diversity of LHDs, offering insights into how these factors influence public health outcomes. The final essay (Chapter 4) applies ordinal regression analyses to explore the relationship between the latent statuses of LHDs and various community health outcomes. Initial analyses using a cumulative logit model indicated a violation of the proportional odds assumption, necessitating a shift to a generalized logit model. 
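For context on the modeling choice above: a cumulative (proportional-odds) logit model constrains a single slope to apply at every outcome threshold, which is exactly the assumption the diagnostics rejected. A minimal sketch of that model's class probabilities, with illustrative parameters (not estimates from the study):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cumulative_logit_probs(x, thresholds, beta):
    """Class probabilities under a proportional-odds model: the same
    slope `beta` applies at every threshold. A generalized logit model
    instead fits a separate coefficient vector per outcome category."""
    cum = [sigmoid(t - beta * x) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# One predictor value, three thresholds -> four ordered categories
probs = cumulative_logit_probs(x=1.2, thresholds=[-1.0, 0.5, 2.0], beta=0.8)
```

When the single-`beta` constraint fails (the proportional odds violation noted above), the per-category coefficients of a generalized logit model are the standard remedy.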
This approach revealed significant predictors of latent statuses, such as poor physical health days, preventable hospital stays, and life expectancy. The findings underscore the complexity of public health systems and the need for careful selection of statistical models to accurately capture these dynamics. The study provides actionable insights for public health policy and strategic planning, highlighting areas for future research and potential interventions to optimize public health system design and operations. This dissertation underscores the importance of systems approaches in understanding and improving public health systems. By leveraging advanced statistical models and exploring the structural characteristics of LHDs, it contributes to a deeper understanding of the factors influencing public health governance and service delivery. The findings offer a foundation for future research and policy development aimed at enhancing the efficiency and effectiveness of public health systems to better serve communities.Item Open Access Investigation of vertical mixing in raceway pond systems using computational fluid dynamics(Colorado State University. Libraries, 2021) Shen, Chen, author; Dandy, David S., advisor; Reardon, Kenneth F., committee member; Bradley, Thomas, committee member; Prasad, Ashok, committee memberRaceway ponds are widely used as cost-efficient and easily set up outdoor algal cultivation systems. Growth rates strongly depend on cumulative light exposure, which can be predicted using accurate computational fluid dynamics simulations of the ponds' dynamics. Of particular importance in computing the three-dimensional velocity field is the vertical component that is responsible for transporting cells between light and dark regions. Numerous previous studies utilized one of the turbulence models derived from the Reynolds-averaged Navier–Stokes equations to predict the turbulent behaviors in raceway ponds. 
Because vertical fluid motion is secondary and the primary flow is in the horizontal plane, using one of the Reynolds-averaged Navier–Stokes turbulence equations has the potential to decrease the fidelity of information about vertical motion. In Chapter 2, large eddy simulation (LES) and k-ε models are used to simulate fluid dynamics in a mesoscale (615 L) raceway pond system and compared with laboratory data. It is found that swirling motions present in the liquid phase play an essential role in the vertical mixing performance. LES is shown to have the capability to provide more realistic and highly time-dependent hydrodynamic predictions when compared with experimental data, while the k-ε model under-predicts the magnitude of the swirling behavior and over-predicts the volume of dead zones in the pond. The instantaneous spatial distribution of high vertical velocity regions and dead zones, as well as their time-accumulated volume fraction, are investigated. LES results suggest that swirling motion exists in the low-velocity regions predicted by the k-ε model to be dead zones, where high-velocity flow takes place over more than 50% of the flow time, and that the recirculating motion may be responsible for stratification and unwanted chemical accumulation. LES results indicate that strong vortex regions exist near the paddle wheel and the first 180° bend, and that the geometry of the divider contributes to the generation of vortices, enhances the vertical motion, and increases the light/dark effect. Chapter 2 thus demonstrates that the swirling motion plays a critical role in enhancing vertical mixing and the light/dark effect.
In Chapter 3, a dimensional analysis is performed to predict the persistence of the swirling motion generated at the hairpin bend by modeling seven raceway pond geometries with shape ratios—defined as the ratio of the width of a straight section to the liquid depth—ranging from 0.5 to 7.05, and Dean numbers ranging from 16,140 to 242,120. The fluid dynamics were simulated using a transient multiphase solver with a large eddy simulation turbulence model in the open-source OpenFOAM framework. The results demonstrate that the number of instances of swirling motion strongly depends on the shape ratio of the pond. When the shape ratio is close to 1, a single instance of swirling motion is most likely to be found downstream of the first 180° bend, while multiple occurrences of swirling motion are observed when the shape ratio is larger than 1. It was also found that the strength of the swirling motion has a linear dependence on the average velocity magnitude downstream of the first 180° bend after the paddle wheel. The strength and persistence of the swirling motion are fit with a rational function that can be used to predict the mixing performance of a raceway pond without the need for complicated and expensive simulations. In Chapter 4, transient particle tracking is performed to predict microalgae cells' vertical motion for more than 800 s, which is subsequently converted to the cells' light intensity history. The light intensity histories, along with the velocity field, are used to test the hypothesis that the cells' trajectories and light/dark (L/D) transitions are dominated by vertical mixing in raceway ponds, primarily the swirling motions generated by the secondary flow in the hairpin bends.
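The two dimensionless groups in the Chapter 3 analysis can be computed directly. The sketch below assumes the conventional definition De = Re·sqrt(D_h/(2·R_c)) and an open-channel hydraulic diameter; the pond dimensions and velocity are illustrative, not values from the study:

```python
import math

def shape_ratio(width_m, depth_m):
    """Ratio of straight-section width to liquid depth."""
    return width_m / depth_m

def dean_number(velocity, depth, width, bend_radius, nu=1.0e-6):
    """Dean number at the hairpin bend, De = Re * sqrt(D_h / (2 * R_c)).
    D_h is the open-channel hydraulic diameter (free surface excluded
    from the wetted perimeter); nu is the kinematic viscosity of water
    in m^2/s. Definitions here are conventional assumptions, not
    necessarily those used in the dissertation."""
    area = width * depth
    wetted = width + 2 * depth
    d_h = 4 * area / wetted
    re = velocity * d_h / nu
    return re * math.sqrt(d_h / (2 * bend_radius))

de = dean_number(velocity=0.25, depth=0.2, width=0.2, bend_radius=0.15)
```

With these illustrative inputs the Dean number lands within the 16,140–242,120 range spanned by the seven modeled geometries.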
It is found that the region where cells have a high probability of experiencing light/dark transitions coincides with the spatial prediction of swirling motion, suggesting that the swirling motion contributes significantly to the light/dark transitions experienced by the microalgae. In Chapter 5, a novel use of vortex generators in a raceway pond is presented; these devices passively generate swirling motion in regions where the strength of vertical motion is predicted to otherwise be low. The flow field is quantitatively simulated using computational fluid dynamics with the large eddy simulation turbulence model. Persistence lengths of the swirling motion generated by the vortex generators indicate that significant vertical mixing can be achieved by placing vortex generators in the straight section opposite the paddle wheel, downstream of the first hairpin bend. Relatively simple vortex generators are capable of creating stronger swirling motions that persist for a longer distance than those caused by the paddle wheel. For optimal performance, vortex generators are positioned side by side but in opposite directions, and their diameters should be equal to or slightly less than the liquid depth. The optimal length of a 0.18 m diameter vortex generator in a 0.2 m deep pond was determined to be 0.3 m.Item Open Access Linking system cost model to system optimization using a cost sensitivity algorithm(Colorado State University. Libraries, 2022) Polidi, Danny Israel, author; Chandrasekar, V., advisor; Borky, Mike, committee member; Bradley, Thomas, committee member; Popat, Ketul, committee memberLack of adequate cost analysis tools early in the design life cycle of a system contributes to non-optimal system design choices both in performance and cost.
Modern software packages exist that perform complex physics-based simulations. Physics-based simulations alone typically do not consider cost as a factor or input variable. Other modern software packages calculate cost and can aid in determining the cost sensitivity of a chosen design solution. It should be possible to combine the system's sensitivity to cost with its sensitivity to performance. Methods and algorithms are needed to determine which components in a system contribute most significantly to overall cost and which design alternatives provide the best value to the system. These methods and algorithms are needed during concept development to aid in system scoping and cost estimation. In the bidding phase of a system design, most of the time is typically spent determining cost; system design trades are seldom performed, or are abbreviated. This is not preferable because the system design becomes locked into place long before significant trades have been performed, and the solution may not be optimal for either cost or performance. This paper reviews the research performed, including the creation of a cost model based on a set of questions and answers to drive system design; electronic design work applicable to the specific subsystem element FLO (Frequency Locked Oscillator); the development of a standardized modular diagram and Work Breakdown Structure (WBS) for a RADAR system for military aerospace applications; and the development of a cost sensitivity algorithm. The goal of the research and the cost sensitivity algorithm is to allow the system designer to optimize for both cost and performance early in the system design cycle.
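One simple way to realize a cost sensitivity algorithm of the kind described is a normalized one-at-a-time perturbation of each cost driver; the RADAR cost model below is a hypothetical stand-in for illustration, not the paper's actual model:

```python
def cost_sensitivity(cost_model, params, rel_step=0.01):
    """Normalized sensitivity of system cost to each parameter:
    (dC / C) / (dx / x), estimated by perturbing one parameter
    at a time by a small relative step."""
    base = cost_model(params)
    sens = {}
    for name, value in params.items():
        bumped = dict(params, **{name: value * (1 + rel_step)})
        sens[name] = ((cost_model(bumped) - base) / base) / rel_step
    return sens

# Hypothetical subsystem cost model: per-FLO cost, power-dependent cost,
# and a fixed integration cost (all figures illustrative)
def radar_cost(p):
    return 40_000 * p["n_flo"] + 2_500 * p["power_w"] + 120_000

sens = cost_sensitivity(radar_cost, {"n_flo": 4, "power_w": 50})
```

Ranking the parameters by `sens` identifies which components most strongly drive overall cost, which is the design-trade question the algorithm is meant to answer early in the design cycle.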