Theses and Dissertations
Recent Submissions
Item Open Access Integrating geometric deep learning with a set-based design approach for the exploration of graph-based engineering systems(Colorado State University. Libraries, 2024) Sirico, Anthony, Jr., author; Herber, Daniel R., advisor; Chen, Haonan, committee member; Simske, Steven, committee member; Conrad, Steven, committee memberMany complex engineering systems can be represented in a topological form, such as graphs. This dissertation introduces a framework of Graph-Set-Based Design (GSBD) that integrates graph-based techniques with Geometric Deep Learning (GDL) within a Set-Based Design (SBD) approach to address graph-centric design problems. We also introduce Iterative Classification (IC), a method for narrowing down large datasets to a subset of more promising and feasible solutions. When we combine the two, we have IC-GSBD, a methodological framework where the primary goal is to effectively and efficiently seek the best-performing solutions with lower computational costs. IC-GSBD is a method that employs an iterative approach to efficiently narrow down a graph-based dataset containing diverse design solutions to identify the most useful options. This approach is particularly valuable as the dataset would be computationally expensive to process using other conventional methods. The implementation involves analyzing a small subset of the dataset to train a machine-learning model. This model is then utilized to predict the remaining dataset iteratively, progressively refining the top solutions with each iteration. In this work, we present two case studies demonstrating this method. In the first case study utilizing IC-GSBD, the goal is the analysis of analog electrical circuits, aiming to match a specific frequency response within a particular range. Previous studies generated 43,249 unique undirected graphs representing valid potential circuits through enumeration techniques. However, determining the sizing and performance of these circuits proved computationally expensive. By using a fraction of the circuit graphs and their performance as input data for a classification-focused GDL model, we can predict the performance of the remaining graphs with favorable accuracy. The results show that incorporating additional graph-based features enhances model performance, achieving a classification accuracy of 80% using only 10% of the graphs and further subdividing the graphs into targeted groups with medians significantly closer to the best and containing 88.2 of the top 100 best-performing graphs on average using 25% of the graphs.Item Open Access Model-based systems engineering application to data management for integrated sustainable human settlement modeling(Colorado State University. Libraries, 2024) Adjahossou, Anicet, author; Grigg, Neil, advisor; Bradley, Thomas, committee member; Conrad, Steven, committee member; Willson, Bryan, committee member; Fremstad, Anders, committee memberThe challenges associated with the transition from current approaches to temporary humanitarian settlement to integrated, sustainable human settlements is largely due to a significant increase in the number of forcibly displaced people over the last few decades, the difficulties of sustainably providing the needed services to the required standards, and the prolongation of emergencies. 
According to the United Nations High Commissioner for Refugees (UNHCR)'s Global Appeal 2023, more than 117.2 million people were forcibly displaced or stateless in 2023, representing a little over 1% of the world's population. The average lifespan of a humanitarian settlement is between 17 and 26 years (UNHCR), and factors such as urban growth and adverse environmental changes have exacerbated the scale of the difficulties. Despite these problematical contexts, short-term considerations continue to guide the planning and management of humanitarian settlements, to the detriment of more integrated, longer-term perspectives. These factors call for a paradigm shift in approach to ensure greater sustainability right from the planning phases. Recent studies often attribute the unsustainability of humanitarian settlements to poor design and inadequate provision of basic resources and services, including water, energy, housing, employment and economic opportunities, among others. They also highlight apparent bottlenecks that hinder access to meaningful and timely data and information that stakeholders need for planning and remediation. More often than not, humanitarian operations rely on ad hoc methods, employing parallel, fragmented and disconnected data processing frameworks, resulting in the collection of a wide range of data without subsequent analysis or prioritization to optimize potential interconnections that can improve sustainability and performance. Furthermore, little effort has been made to investigate the trade-offs involved. As a result, major shortcomings emerged along the way, leading to disruption, budget overruns, disorder and more, against a backdrop of steadily declining funding for humanitarian aid. However, some attempts have been made to move towards more sustainable design approaches, but these have mainly focused on vague, sector-specific themes, ignoring systemic and integrative principles. This research is a contribution to filling these gaps by developing more practical and effective solutions, based on an integrated systemic vision of a human settlement, defined and conceptualized as a complex system. As part of this process, this research proposes a model-driven methodology, supported by Model-Based Systems Engineering (MBSE) and a Systems Modeling Language (SysML), to develop an integrated human settlement system model, which has been functionally and operationally executed using Systems Engineering (SE) approach. This novel system model enables all essential sub-systems to operate within the single system, and focuses on efficient data processing. The ultimate aim is to provide a global solution to the interconnection and integration challenges encountered in the processing of operational data and information, to ensure an effective transition to sustainable human settlements. With regard to the interconnectedness between the different sectors of the sub-systems, this research proposes a Triple Nexus Framework (TNF) in an attempt to integrate water, energy and housing sector data derived from one sub-system within the single system by applying systems engineering methods. Systems engineering, based on an understanding of the synergies between water, energy and housing, characterizes the triple nexus framework and identifies opportunities to improve decision-making steps and processes that integrate and enhance quality of data processing. 
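As an illustration of the kind of integrated, per-settlement data record the triple nexus framework points toward, the sketch below links water, energy, and housing indicators in a single structure and runs a simple cross-sector check. The field names, units, and thresholds are hypothetical placeholders for this listing, not values from the dissertation.

```python
from dataclasses import dataclass

@dataclass
class SettlementRecord:
    """One integrated record per settlement, linking the three nexus sectors."""
    settlement_id: str
    water_lpcd: float             # liters per capita per day supplied (illustrative)
    energy_kwh_hh_month: float    # household electricity, kWh/month (illustrative)
    housing_m2_per_person: float  # covered living area per person (illustrative)

    def meets_minimums(self, water_min=15.0, energy_min=10.0, housing_min=3.5):
        """Cross-sector check against illustrative planning thresholds."""
        return (self.water_lpcd >= water_min and
                self.energy_kwh_hh_month >= energy_min and
                self.housing_m2_per_person >= housing_min)

records = [
    SettlementRecord("camp-A", water_lpcd=18.0, energy_kwh_hh_month=12.0,
                     housing_m2_per_person=4.0),
    SettlementRecord("camp-B", water_lpcd=11.0, energy_kwh_hh_month=6.0,
                     housing_m2_per_person=3.0),
]
for r in records:
    print(r.settlement_id, "meets minimum service levels:", r.meets_minimums())
```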
To test and validate the performance of the system model, two scenarios are executed to illustrate how an integrated data platform enables easy access to meaningful data as a starting point for modeling an integrated system of sustainable human settlement in humanitarian contexts. With regard to framework performance, the model is simulated using a megadata nexus, as specified by the system requirement. The optimization simulation yields 67% satisfactory results, which is further confirmed by a set of surveyed practitioners. These results show that an integrated system can improve the sustainability of human settlements beyond a sufficiently acceptable threshold, and that capacity building in service delivery is beneficial and necessary. The focus on comprehensive data processing through systems integration can be a powerful tool for overcoming gaps and challenges in humanitarian operations. Structured interviews with question analysis are conducted to validate the proposed model and framework. The results show a consensus that the novel system model advances the state of the art in the current approach to the design and management of human settlements. An operational roadmap with substantial programmatic and technical activities required to implement the triple nexus framework is recommended for adoption and scaling up. Finally, to assess the sustainability, adaptability and applicability of the system, the proposed system model is further validated using a context-based case study, through a capacity assessment of an existing humanitarian settlement. The sustainability analysis uses cross-impact matrix multiplication applied to classification (MICMAC) methodologies, and results show that the development of the settlement is unstable and therefore unsustainable, since there is no apparent difference between influential and dependent data. This research tackles an important global challenge, providing valuable insights towards sustainable solutions for displaced populations, aligning with the United Nations 2030 Agenda for Sustainable Development.
Item Open Access Characterizing and improving the adoption rate of model-based systems engineering through an application of the Diffusion of Innovations theory(Colorado State University. Libraries, 2024) Call, Daniel R., author; Herber, Daniel R., advisor; Aloise-Young, Patricia, committee member; Conrad, Steven, committee member; Shahroudi, Kamran Eftekhari, committee memberAs the environment and operational context of new systems continue to evolve and become increasingly complex, the practice of systems engineering (SE) must adapt accordingly. A great deal of research and development has gone and continues to go into formulating and maturing a model-based approach to SE that addresses many of the shortcomings of a conventional, document-based SE approach. In spite of the work that has been done to advance the practice of model-based systems engineering (MBSE), it has not yet been adopted to a level that would be expected based on its demonstrated benefits. While research continues into even more effective MBSE approaches, there is a need to ascertain why extant MBSE innovations are not being adopted more widely, and if possible, determine a way to accelerate their adoption. This outcome is particularly important as MBSE is a key enabler to an agile systems engineering (ASE) approach that satisfies the desire of many stakeholders to apply agile principles to SE processes.
The diffusion of innovations (DoI) theory provides a useful framework for understanding the factors that affect the adoption rate of innovations in many fields. This theory has not only been effective at explaining why innovations are adopted but has also been able to explain why objectively superior innovations are not adopted. The DoI theory is likely to provide insight into the factors that are depressing the adoption rate of MBSE. Despite prior efforts in the SE community to promote MBSE, the DoI theory has not been directly and deliberately applied to understand what is preventing widespread MBSE adoption. Some elements of the theory appear in the literature addressing MBSE adoption challenges without any recognition of awareness of the theory and its implications. The expectation is that harnessing the insights offered by this theory will lead to MBSE presentation and implementation strategies that will increase its use. This would allow its benefits to be more widely realized in the SE community and improve the practice of SE generally to address modern, complex environments. The DoI theory has shown that the most significant driver of adoption rate variability is the perceived attributes of the innovation in question. A survey is a useful tool to discover the perceptions of potential adopters of an innovation. The primary contribution of this research is the development of a survey to capture and assess a participant's perceptions of specified attributes of MBSE, their current use of MBSE, and some limited demographic information. This survey was widely distributed to gather data on current perceptions of MBSE in the SE community. Survey results highlighted that respondents recognize the relative advantage of MBSE in improving data quality and traceability, but perceived complexity and compatibility with existing practices still present barriers to adoption. Subpopulation analysis reveals that those who are not already involved in MBSE efforts face the additional adoption obstacles of limited trial opportunities and tool access (chi-squared test of independence between these populations resulted in p = 0.00). The survey underscores the potential for closer alignment between MBSE and existing SE methodologies to improve the perceived compatibility of MBSE. Targeted actions are proposed to address these barriers to adoption. These targeted actions include improving the availability and use of reusable model elements to expedite system model development, improved tailoring of MBSE approaches to better suit organizational needs, an increased emphasis on ASE, refining MBSE approaches to reduce the perceived mental effort required, a lowering of the barrier to entry for MBSE by improving access to the resources (tool, time, and training) required to experiment with MBSE, and increased efforts to identify and execute relevant MBSE pilot projects. The lessons and principles from the DoI theory should be applied to take advantage of the opportunity afforded by the release of SysML v2 to reframe perceptions of MBSE. Future studies would benefit from examining additional variables identified by the DoI theory, incorporating control questions to differentiate between perceptions of SE generally and MBSE specifically, identifying better methods to assess current MBSE use by participants, and measures to broaden the participant scope.Item Embargo Investigating the association between public health system structure and system effectiveness(Colorado State University. 
Libraries, 2024) Orr, Jason, author; Golicic, Susan, advisor; Bradley, Thomas, committee member; Miller, Erika, committee member; Gutilla, Molly, committee member; Magzamen, Sheryl, committee memberPublic health systems in the United States face significant challenges due to their complexity and variability. This dissertation follows a three-paper format and examines these systems through a comprehensive analysis, using systems approaches, latent transition analysis (LTA), and ordinal regression to uncover patterns and inform improvements in public health governance and service delivery. The first essay (Chapter 2) explores the application of systems approaches to the design and improvement of public health systems. A scoping review was conducted, revealing a paucity of literature on the use of "hard" systems methodologies like systems analysis and engineering in public health. The findings highlight the potential for systems approaches to enhance the efficiency, effectiveness, and equity of public health services. However, the limited engagement by public health practitioners and the lack of depth in existing literature indicate significant gaps that need to be addressed to fully leverage systems science in public health governance and service delivery. Building on the literature review, the second essay (Chapter 3) introduces a novel typology of local health departments (LHDs) using LTA based on the National Association of County and City Health Officials (NACCHO) Profile study data. The LTA identified six distinct latent statuses of LHDs, characterized by variables such as governance centrality, colocation, and integration. This typology provides a robust framework for understanding the structural and operational diversity of LHDs, offering insights into how these factors influence public health outcomes. The final essay (Chapter 4) applies ordinal regression analyses to explore the relationship between the latent statuses of LHDs and various community health outcomes. Initial analyses using a cumulative logit model indicated a violation of the proportional odds assumption, necessitating a shift to a generalized logit model. This approach revealed significant predictors of latent statuses, such as poor physical health days, preventable hospital stays, and life expectancy. The findings underscore the complexity of public health systems and the need for careful selection of statistical models to accurately capture these dynamics. The study provides actionable insights for public health policy and strategic planning, highlighting areas for future research and potential interventions to optimize public health system design and operations. This dissertation underscores the importance of systems approaches in understanding and improving public health systems. By leveraging advanced statistical models and exploring the structural characteristics of LHDs, it contributes to a deeper understanding of the factors influencing public health governance and service delivery. The findings offer a foundation for future research and policy development aimed at enhancing the efficiency and effectiveness of public health systems to better serve communities.Item Open Access On the integration of materials characterization into the product development lifecycle(Colorado State University. 
Libraries, 2024) Dare, Matthew S., author; Simske, Steve, advisor; Yourdkhani, Mostafa, committee member; Herber, Daniel, committee member; Radford, Donald W., committee memberThe document is broken down into four sections whereby a more complete integration of materials characterization into the product development lifecycle, when compared to traditional approaches, is researched and considered. The driving purpose behind this research is to demonstrate that an application of systems engineering principles to the characterization sciences mechanism within materials engineering and development will produce a more efficient and comprehensive understanding of complex material systems. This will allow for the mitigation of risk, enhancement of relevant data, and planning of characterization procedures proactively. The first section proposes a methodology for Characterization Systems Engineering (CSE) as an aid in the development life cycle of complex, material systems by combining activities traditionally associated with materials characterization, quality engineering, and systems engineering into an effective hybrid approach. The proposed benefits of CSE include shortened product development phases, faster and more complete problem solving throughout the full system life cycle, and a more adequate mechanism for integrating and accommodating novel materials into already complex systems. CSE also provides a platform for the organization and prioritization of comprehensive testing and targeted test planning strategies. Opportunities to further develop and apply the methodology are discussed. The second section focuses on the need for and design of a characterizability system attribute to assist in the development of systems that involve material components. While materials characterization efforts are typically treated as an afterthought during project planning, the argument is made here that leveraging the data generated via complete characterization efforts can enhance manufacturability, seed research efforts and intellectual property for next-generation projects, and generate more realistic and representative models. A characterizability metric is evaluated against a test scenario, within the domain of electromagnetic interference shielding, to demonstrate the utility and distinction of this system attribute. Follow-on research steps to improve the depth of the attribute application are proposed. In the third section, a test and evaluation planning protocol is developed with the specific intention of increasing the effectiveness of materials characterization within the system development lifecycle. Materials characterization is frequently not accounted for in the test planning phases of system developments, and a more proactive approach to streamlined verification and validation activities can be applied. By applying test engineering methods to materials characterization, systems engineers can produce more complete datasets and more adequately execute testing cycles. A process workflow is introduced to manage the complexity inherent to material systems development and their associated characterization sciences objectives. An example using queuing theory is used to demonstrate the potential efficacy of the technique. Topics for further test and evaluation planning for materials engineering applications are discussed. 
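The queuing-theory example mentioned above is not specified in detail in this listing; a minimal sketch of what such an analysis could look like, assuming a single characterization lab modeled as an M/M/1 queue with hypothetical arrival and service rates, is:

```python
# Hypothetical arrival and service rates for characterization test requests.
arrival_rate = 4.0   # test requests arriving per week (lambda)
service_rate = 5.0   # requests the characterization lab completes per week (mu)

rho = arrival_rate / service_rate        # lab utilization
lq = rho**2 / (1 - rho)                  # mean number of requests waiting
wq = lq / arrival_rate                   # mean wait before testing begins (weeks)
w = wq + 1 / service_rate                # mean total turnaround time (weeks)

print(f"utilization={rho:.2f}, queue={lq:.2f} requests, "
      f"wait={wq:.2f} wk, turnaround={w:.2f} wk")
```

Even this simple model shows how rising utilization drives turnaround time up nonlinearly, which is the kind of insight the proposed test and evaluation planning protocol is meant to surface early.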
In the fourth section, a workflow is proposed to more appropriately address the risk generated by materials characterization activities within the development of complex material systems when compared to conventional engineering approaches. Quality engineering, risk mitigation efforts, and emergency response protocols are discussed with the intention of reshaping post-development phase activities to address in-service material failures. While root cause investigations are a critical component to stewardship of the full system lifecycle during a product's development, deployment and operation, a more tailored and proactive response to system defects and failures is required to meet the increasingly stringent technical performance requirements associated with modern, material-intensive systems. The analysis includes a Bayesian approach to risk assessment of materials characterization efforts through which uncertainty regarding scheduling and cost can be quantified.
Item Open Access Framework for optimizing survivability in complex systems(Colorado State University. Libraries, 2024) Younes, Megan Elizabeth, author; Cale, James, advisor; Gallegos, Erika, committee member; Simske, Steve, committee member; Gaofeng, Jia, committee memberThe increasing occurrence of high-probability, low-frequency events such as extreme weather incidents, in combination with aging infrastructure in the United States, puts the survivability of the nation's critical infrastructure, such as hydroelectric dams, at risk. Maximizing resiliency in complex systems can be viewed as a multi-objective optimization that includes system performance, survivability, economic and social factors. Systems requiring high survivability, such as a hydroelectric dam, typically require one or more redundant (standby) subsystems, which increases system cost. To optimize the tradeoffs between system survivability and cost, this research introduces an approach for obtaining the Pareto-optimal set of design candidates ("resilience frontier"). The method combines Monte Carlo (MC) sampling to estimate total survivability and a genetic algorithm (GA), referred to as the MCGA, to obtain the resilience frontier. The MCGA is applied to a hydroelectric dam to maximize overall system survivability. The MCGA is demonstrated through several numerical case studies. The results of the case studies indicate that the MCGA approach shows promise as a tool for evaluating survivability versus cost tradeoffs and also as a potential design tool for choosing system configuration and components to maximize overall system resiliency.
Item Open Access The dual lens of sustainability: economic and environmental insights into novel carbon reduction technologies using systems modeling, data science, and multi-objective optimization(Colorado State University. Libraries, 2024) Limb, Braden Jeffery, author; Quinn, Jason C., advisor; Simske, Steven J., advisor; Gallegos, Erika E., committee member; Ross, Matthew R. V., committee memberIn an era marked by escalating climate change and increasing energy demands, the pursuit of sustainable solutions in energy production and environmental management is more critical than ever. This dissertation delves into this challenge, focusing on innovative technologies aimed at reducing carbon emissions in key sectors: power generation, wastewater treatment, and aviation.
The first segment of the dissertation explores the integration of thermal energy storage with natural gas power plants using carbon capture, a crucial advancement given the dominant role of fossil fuel-based power plants in electricity generation. Addressing the economic and operational drawbacks of current carbon capture and storage (CCS) technologies, this study evaluates various thermal storage configurations. It seeks to enhance plant performance through energy arbitrage, a novel approach to offset the large heat loads required for carbon capture solvent regeneration. By optimizing these technologies for current and future grid pricing and comparing their feasibility with other production methods, this research aims to strike a balance between maintaining reliable power generation and adhering to stringent environmental targets. Results show that resistively charged thermal storage can both increase CCS flexibility and power plant profits through energy arbitrage when compared to power plants with CCS but without thermal storage. Beyond electrical systems, addressing climate change also necessitates improving the energy efficiency of water treatment technologies. Therefore, the dissertation investigates the potential of nature-based solutions as sustainable alternatives to traditional water treatment methods in the second section. This section probes into the efficacy of green technologies, such as constructed wetlands, in reducing costs and emissions compared to conventional gray infrastructure. By quantifying the impact of these technologies across the U.S. and evaluating the role of carbon financing, the research highlights a pathway towards more environmentally friendly and economically viable water treatment processes. Results show that nature-based water treatment technologies can treat up to 37% of future nutrient loading while both decreasing water treatment costs and emissions compared to traditional water treatment techniques. The transportation sector will play a key role in addressing climate change as it is the largest contributor to greenhouse gas emissions. While most of the transportation sector is expected to transition to electric vehicles to decrease its carbon footprint, aviation remains hard to decarbonize as electric passenger aviation is expected to be range limited. Therefore, the final segment of the dissertation addresses the challenge of meeting the U.S. Department of Energy's Sustainable Aviation Fuel (SAF) goals. It involves a comprehensive analysis of various bioenergy feedstocks for SAF production, using GIS modeling to assess their economic and environmental impacts across diverse land types. The study employs multi-objective optimization to strategize the deployment of these feedstocks, considering factors like minimum fuel selling price, greenhouse gas emissions, and breakeven carbon price. Furthermore, agent-based modeling is used to identify policy incentives that could encourage farmer adoption of bioenergy crops, a critical step towards meeting the SAF Grand Challenge goals. This dissertation offers a comprehensive analysis of novel carbon reduction technologies, emphasizing both economic viability and environmental sustainability. By developing integrated models across key sectors affected by climate change, it explores the benefits and trade-offs of various sustainability strategies. 
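To make the multi-objective feedstock comparison described above concrete, the sketch below filters a handful of hypothetical feedstock candidates down to their non-dominated (Pareto) set over minimum fuel selling price and greenhouse gas intensity; the feedstock names and numbers are illustrative only, and the dissertation's actual GIS-based optimization is far more detailed.

```python
# Hypothetical feedstock candidates: (name, minimum fuel selling price in $/gal,
# life-cycle GHG intensity in gCO2e/MJ). Both objectives are minimized.
candidates = [
    ("switchgrass", 4.10, 28.0),
    ("corn stover", 3.80, 35.0),
    ("carinata",    4.60, 22.0),
    ("miscanthus",  4.20, 31.0),
]

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and strictly better on one."""
    return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])

pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates if other is not c)]
print("non-dominated feedstocks:", [name for name, *_ in pareto])
```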
Incorporating geospatial and temporal dimensions, the research uses multi-objective optimization and systems thinking to provide targeted investment strategies for the greatest impact. The results provide important insights and actionable plans for policymakers and industry leaders, contributing to a sustainable and low-carbon future in essential areas of the global economy.Item Open Access Quality control of front-end planning for electric power construction: a collaborative process-based approach using systems engineering(Colorado State University. Libraries, 2024) Nguyen, Frank Bao Thai, author; Grigg, Neil, advisor; Valdes-Vasquez, Rodolfo, advisor; Gallegos, Erika, committee member; Glick, Scott, committee memberControlling construction costs in the electric power industry will become more important as the nation responds to new energy demands due to the transition from gasoline to electric vehicles and to emerging trends such as artificial intelligence and use of cryptocurrency. However, managing electric utility construction project costs requires that the risk of field change orders (FCOs) during construction be controlled. In the electric power industry, utility companies face increasing risk from FCOs, due to conversion from overhead to underground systems required by security and climate change factors, and subgrade work is more challenging and less predictable than the more visible overhead work. Change orders cause cost overruns and schedule slippages and can occur for reasons such as changes in scope of work, unforeseen jobsite conditions, modifications of plans to meet existing field conditions, and correction of work required by field inspectors to meet safety standards. The best opportunity to control FCOs comes during front-end planning (FEP) when conditions leading to them can be identified and mitigated. This study utilized systems engineering methodologies to address risk of FCOs in three phases: (1) defining the root causes and identifying severities of FCOs, (2) evaluating stakeholder responsibilities to find and mitigate root causes of FCOs, and (3) developing a process to identify and find solutions for the risk of FCOs. The first phase involved using a descriptive statistical analysis of the project database of an electric utility company to identify and analyze the magnitude, frequency, and causes of FCOs in overhead and underground electrical construction. The results showed that FCOs with added scopes occurred more frequently in underground projects than in overhead projects. The analysis also indicated that most causes of FCOs could be managed during the FEP process, and it laid a foundation for the next phase, to promote collaboration among stakeholders to allocate responsibility to identify and mitigate risk of FCOs. In the second phase, the study used Analytical Hierarchy Process methodologies to distribute weights of stakeholder votes to create an integrated metric of front-end planning team confidence that a desired level of quality had been achieved. This study was significant in that it showed how effectiveness of collaborative working relationships across teams during front-end planning could be improved to create a quality control metric to capture risk of FCOs. In the third phase, the study used results from the first two phases and additional tools based on Swimlane diagrams and logical relationships between tasks and stakeholders to formulate a quality control roadmap model. 
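A minimal sketch of the Analytical Hierarchy Process step described above, deriving stakeholder weights from a single pairwise comparison matrix and checking its consistency ratio; the matrix values and the three stakeholder groups are hypothetical, and the dissertation combines such weights across the full front-end planning team.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# front-end planning stakeholder groups, e.g., engineering, estimating, field.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalized priority weights

n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)              # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty random index
cr = ci / ri                                 # consistency ratio (< 0.10 acceptable)

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```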
This model is significant because it creates a roadmap to enhance the effectiveness of interdisciplinary teamwork through a critical path of the FEP process. The roadmap model shows a streamlined process for decision-making in each phase of front-end planning to minimize risk of FCOs through a logical path prior to final design. While there have been efforts to improve the design process, this study is the first one known to the researcher to address quality control of FEP using a roadmap process for quality control in electric power construction projects. The primary contribution is to enrich the body of knowledge about quality control of FEP by creating a roadmap model based on systems engineering and enhancing the effectiveness of collaborative working relationships in a logical process that captures risk of FCOs early in the FEP process. Besides the contribution of a method to reduce the risk of FCOs, the study points to another important concern to the construction industry about safety on the jobsite. The contractor normally requires a time extension to complete the work due to an FCO, but to reduce the impact to the project schedule, overtime is normally provided to the construction workers to perform the task. Additional research on this issue is required, but it is apparent that due to the fatigue of long working hours, this overtime may impact the task performance as well as the physical and psychological well-being of the construction workers, and they may lose safety awareness and have higher risk of accidents on the construction site. Thus, reducing the risk of FCOs will lead to less overtime and is an effective way for the construction project team to reduce the risk of construction accidents.Item Open Access Novel assessments of country pandemic vulnerability based on non-pandemic predictors, pandemic predictors, and country primary and secondary vaccination inflection points(Colorado State University. Libraries, 2024) Vlajnic, Marco M., author; Simske, Steven, advisor; Cale, James, committee member; Conrad, Steven, committee member; Reisfeld, Bradley, committee memberThe devastating worldwide impact of the COVID-19 pandemic created a need to better understand the predictors of pandemic vulnerability and the effects of vaccination on case fatality rates in a pandemic setting at a country level. The non-pandemic predictors were assessed relative to COVID-19 case fatality rates in 26 countries and grouped into two novel public health indices. The predictors were analyzed and ranked utilizing machine learning methodologies (Random Forest Regressor and Extreme Gradient Boosting models, both with distribution lags, and a novel K-means-Coefficient of Variance sensitivity analysis approach and Ordinary Least Squares Multifactor Regression). Foundational time series forecasting models (ARIMA, Prophet, LSTM) and novel hybrid models (SARIMA-Bidirectional LSTM and SARIMA-Prophet-Bidirectional LSTM) were compared to determine the best performing and accurate model to forecast vaccination inflection points. XGBoost methodology demonstrated higher sensitivity and accuracy across all performance metrics relative to RFR, proving that cardiovascular death rate was the most dominant predictive feature for 46% of countries (Population Health Index), and hospital beds per thousand people for 46% of countries (Country Health Index). 
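A minimal sketch of the kind of model comparison described above, fitting a Random Forest regressor and an XGBoost regressor on synthetic stand-in data and ranking predictive features by importance; the feature names echo predictors mentioned in the abstract, but the data, hyperparameters, and printed results are illustrative assumptions, not the dissertation's findings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
features = ["cardio_death_rate", "hospital_beds_per_1k", "female_smokers"]
X = rng.normal(size=(500, len(features)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"RFR": RandomForestRegressor(n_estimators=300, random_state=0),
          "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0)}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    ranking = sorted(zip(features, model.feature_importances_),
                     key=lambda t: t[1], reverse=True)
    print(f"{name}: RMSE={rmse:.3f}, top predictor={ranking[0][0]}")
```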
The novel K-means-COV sensitivity analysis approach performed with high accuracy and was successfully validated across all three methods, demonstrating that female smokers was the most common predictive feature across different analysis sets. The new model was also validated with the Calinski-Harabasz methodology. Every machine learning technique that was evaluated showed great predictive value and high accuracy. At a vaccination rate of 13.1%, the primary vaccination inflection point was achieved at 83.27 days. The secondary vaccination inflection point was reached at 339.31 days at the cumulative vaccination rate of 67.8%. All assessed machine and deep learning methodologies performed with high accuracy relative to COVID-19 historical data, demonstrated strong forecasting value, and were validated by anomaly and volatility detection analyses. The novel triple hybrid model performed the best and had the highest accuracy across all performance metrics. To be better prepared for future pandemics, countries should utilize sophisticated machine and deep learning methodologies and prioritize the health of elderly, frail and patients with comorbidities.Item Open Access Time-delta method for measuring software development contribution rates(Colorado State University. Libraries, 2024) Bishop, Vincil Chapman, III, author; Simske, Steven J., advisor; Vans, Marie, committee member; Malaiya, Yashwant, committee member; Ray, Indrajit, committee memberThe Time-Delta Method for estimating software development contribution rates provides insight into the efficiency and effectiveness of software developers. It proposes and evaluates a framework for assessing software development contribution and its rate (first derivative) based on Commit Time Delta (CTD) and software complexity metrics. The methodology relies on analyzing historical data from software repositories, employing statistical techniques to infer developer productivity and work patterns. The approach combines existing metrics like Cyclomatic Complexity with novel imputation techniques to estimate unobserved work durations, offering a practical tool for evaluating the engagement of software developers in a production setting. The findings suggest that this method can serve as a reliable estimator of development effort, with potential implications for optimizing software project management and resource allocation.Item Open Access Modeling energy systems using large data sets(Colorado State University. Libraries, 2024) Duggan, Gerald P., author; Young, Peter, advisor; Zimmerle, Daniel, advisor; Bradley, Thomas, committee member; Carter, Ellison, committee memberModeling and simulation are playing an increasingly import role in the sciences, and science is having a broader impact on policy definition at a local, national, and global scale. It is therefore important that simulations which impact policy produce high-quality results. The veracity of these models depend on many factors, including the quality of input data, the verification process for the simulations, and how result data are transformed into conclusions. Input data often comes from multiple sources and it is difficult to create a single, verified data set. This dissertation describes the challenges in creating a research-quality, verified and aggregated data set. 
It then offers solutions to these challenges, then illustrates the process using three case studies of published modeling and simulation results from different application domains.Item Open Access Managing risk in commercial-off-the-shelf based space hardware systems(Colorado State University. Libraries, 2024) Herbert, Eric W., author; Bradley, Thomas, advisor; Sega, Ronald, advisor; Herber, Daniel, committee member; Shahroudi, Kamran, committee member; Wise, Daniel, committee memberThe space industry is experiencing a dynamic renaissance. From 2005 to 2021, the industry has exhibited a 265% increase in commercial and government investment [1]. The demand is forecasted to continue its upward trajectory by an added 55% by 2026 [1]. So, the aerospace industry continually seeks innovative space hardware solutions to reduce cost and to shorten orbit insertion schedules. Using Commercial-Off-the-Shelf (COTS) components to build space-grade hardware is one method that has been proposed to meet these goals. However, using non-space-grade COTS components requires designers to identify and manage risks differently early in the development stages. Once the risks are identified, then sound and robust risk management efforts can be applied. The methods used must verify that the COTS are reliable, resilient, safe, and able to survive rigorous and damaging launch and space environments for the mission's required longevity or that appropriate mitigation measures can be taken. This type of risk management practice must take into consideration form-fit-function requirements, mission objectives, size-weight-and-performance (SWaP) constraints, how the COTS will perform outside of its native applications, manufacturing variability, and lifetime expectations, albeit using a different lens than those traditionally used. To address these uncertainties associated with COTS the space industry can employ a variety of techniques like performing in-depth component selections, optimizing designs, instituting robust stress screening, incorporating protective and preventative measures, or subjecting the hardware to various forms of testing to characterize the hardware's capabilities and limitations. However, industrial accepted guidance to accomplish this does not reside in any standard or guide despite space program policies encouraging COTS use. One reason is because companies do not wish to reveal their proprietary methods used to evaluate COTS which, if broadcast, could benefit their market competition. Another is that high value spacecraft sponsors still cling to low-risk time consuming and expensive techniques that require the use of space hardware built with parts that have historical performance pedigrees. Keeping this data hidden does not help the space industry, especially when there is a push to field space systems that are built with modern technologies at a faster rate. This is causing a change in basic assumptions as stakeholders begin to embrace using parts from other industries such as the automotive, aviation, medical, and the like on a more frequent basis. No longer are COTS relegated to use in CubeSats or research and development spacecraft that have singular and limited missions that are expected to function for a brief period. This is because COTS that are produced for terrestrial markets are equally as dependable because of the optimized manufacturing and quality control techniques that reduce product variability. 
This increases the use of COTS parts in space hardware designs where until recently space programs had dared not to tread. But using COTS does come with a unique set of uncertainties and risks that still need to be identified and mitigated. Despite legacy risk management tools being mature and regularly practiced across a diverse industrial field, there is not a consensus on which risk management tools are best to use when evaluating COTS for space hardware applications. However, contained within technical literature amassed over the last twenty-plus years there exists significant systems engineering controls and enablers that can be used to develop robust COTS-use risk management frameworks. The controls and enablers become the basis to identify where aleatory and epistemic uncertainties exist within a COTS-based space system hardware design. With these statements in mind, unique activities can be defined to analyze, evaluate, and mitigate the uncertainties and the inherent risks to an acceptable level or to determine if a COTS-based design is not appropriate. These concepts were explored and developed in this research. Specifically, a series of COTS centric risk management frameworks were developed that can be used as a roadmap when considering integrating COTS into space hardware designs. From these frameworks unique risk evaluation processes were developed that identified the unique activities needed to effectively evaluate the non-space grade parts being considered. The activities defined in these risk evaluation processes were tailored to uncover as much uncertainty as possible so that appropriate risk mitigation techniques could be applied, design decisions could be quickly made from an informed perspective, and spacecraft fielding could be accomplished at an accelerated rate. Instead of taking five to ten years to field a spacecraft, it can now take less than one to three years. Thus, if effectively used, COTS integration can be a force multiplier throughout the space industry. But first, the best practices learned over the last few decades must be collected, synthesized, documented, and applied. To validate the risk frameworks discussed, a COTS-based space-grade secondary lithium-ion battery was chosen to demonstrate that the concepts could work. Unique risk evaluation activities were developed that took into consideration the spacecraft's mission, environment, application, and lifetime (MEAL) [2] attributes to characterize the battery's COTS cells, printed circuit board, electrical design, and electrical-electronic-electromechanical (EEE) performance, strengths, and weaknesses. The activities defined and executed included risk evaluation activities that included a variety of modeling, analyses, non-destructive examinations, destructive physical assessments, environmental testing, worst case scenario testing, and manufacturing assessments. These activities were developed based on the enablers and controls extracted from the data that was resident in the literature that was reviewed. The techniques employed proved quite successful in uncovering and mitigating numerous aleatory and epistemic uncertainties. The mitigation of these uncertainties significantly improved the battery's design and improved the battery's performance. As a result, the COTS-based battery was successfully built, qualified, and flown on a fleet of launch vehicles and payloads. 
The information that follows documents how the risk management frameworks were created, what influenced its architecture, and how these were successfully validated. Validating the COTS centric risk management framework was important because it demonstrated the risk management frameworks' utility to uncover uncertainty. It also proved that methods exist that can be readily employed that are not typically within the scope of traditional space hardware design and qualification techniques. This is important because it provides the industry a new set of systems engineering tools that can be employed to limit the impact of supply chain constraints, reduce reliance on expensive low-yield hardware procurement practices, and minimize the amount of obsolete hardware in designs which tend to constrain the space system hardware's performance. As a result, the techniques developed in this research start to fill a gap that exists in the space industry's systems engineering toolbox.Item Open Access Structural health monitoring in adhesively bonded composite joints(Colorado State University. Libraries, 2024) Caldwell, Steven, author; Radford, Donald W., advisor; Simske, Steven, committee member; Cale, James, committee member; Adams, Henry, committee memberComposite bonded aircraft structure is a prevalent portion of today's aircraft structural composition. Adequate bond integrity is a critical aspect of the fabrication and operational service life of aircraft structure. Many of these structural bonds are critical for flight safety. Thus, a major concern is related to the assurance of quality in the structural bond. Over the last decade, non-destructive bond evaluation techniques have improved but still cannot detect a structurally weak bond that exhibits full adherend/adhesive contact. Currently, expensive, and time-consuming structural proof testing is required to verify bond integrity. The objective of this work is to investigate the feasibility of bondline integrity monitoring via piezoelectric sensors embedded in the composite joint. Initially, a complex composite joint, the Pi preform, was analytically evaluated for health monitoring viability, with the results showing promising capability. Subsequently, due to experimental complexities, a simple, state-of-the-art composite single lap shear joint is selected for experimentation and analysis to measure and quantify the effect of incorporating a sensor within the bondline to evaluate and expand on the ability of the embedded sensor to monitor and assess the joint integrity. Simple flatwise tension joints are also studied to investigate an orthogonal loading direction through the sensor. The experimental results indicate that the embedded piezoelectric sensors can measure a change in the joint before the integrity degrades and fails on subsequent loadings, resulting in a novel approach for prognostic performance evaluation without detrimentally affecting the performance of the structural joint.Item Open Access Raw material optimization and CO₂ sensitivity-predictive analytics in cement manufacturing: a case study at Union Bridge Plant, Heidelberg Materials, Maryland(Colorado State University. Libraries, 2024) Boakye, Kwaku, author; Simske, Steve, advisor; Bradley, Tom, committee member; Troxell, Wade, committee member; Goemans, Chris, committee memberCement has been in use by humans throughout history, and its manufacturing process has undergone many changes. 
Strong economic growth around the world and the demand for rapid infrastructure development driven by population growth are the underlying reasons for the globally high demand for cement. Cement is produced by grinding clinker together with a mixture of ground gypsum. The clinker is produced using a rotary kiln, which burns a mixture of limestone, clay, magnesium, silica, and iron with desired atomic percentages through the calcination process. The quarry serves as the main source of raw material for the rotary kiln in cement production. Over the years, cement manufacturing has had negative environmental, social, and political impacts on society. These impacts include the overuse of raw material obtained by mining, which results in disturbed landmass, the overproduction of rock waste material, and the emission of CO2 from the calcination of limestone in the pyro process. The study looks at three cement manufacturing systems and uses different methodologies to achieve results that can be implemented in the cement industry. These three systems were (1) the quarry, (2) the preheat tower, and (3) the kiln. Ensuring the consistency of material feed chemistry, with the quarry playing a pivotal role, is essential for optimizing the performance of a rotary kiln. The optimization of the raw material also allows limited use of raw materials for cement manufacturing, cutting down waste. The study employed a six-step methodology, incorporating a modified 3D mining software modeling tool, a database computer loop prediction tool, and other resources to enhance mining sequencing, optimize raw material utilization, and ensure a consistent chemistry mix for the kiln. By using overburden as a raw material in the mix, the quarry substantially reduced the environmental impact of discarding unwanted material. This has a significant environmental benefit, since less space is required to manage the overburden waste generated during mining. In addition, raw material usage was optimized for clinker production, yielding a 4% reduction in sand usage as a raw material, a reduction in raw material purchase cost, a reduction in the variability of kiln feed chemistry, and the production of high-quality clinker. The standard deviation of kiln feed LSF improved by 45 percent, leading to a 65 percent reduction in the variability of kiln feed. The study also uses machine learning methods to model different stages of the calcination process in cement and to improve knowledge of the generation of CO2 during cement manufacturing. Calcination plays a crucial role in assessing clinker quality, energy requirements, and CO2 emissions within a cement-producing facility. However, due to the complexity of the calcination process, accurately predicting CO2 emissions has historically been challenging. The objective of this study is to establish a direct relationship between CO2 generation during the raw material manufacturing process and various process factors. In this paper, six machine-learning techniques are explored to analyze two output variables: (1) the apparent degree of oxidation, and (2) the apparent degree of calcination. A sensitivity analysis of CO2 molecular composition (on a dry basis) utilizes over 6000 historical manufacturing health data points as input variables, and the findings are utilized to train the algorithms.
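A minimal sketch of how several regression techniques might be compared by RMSE for one of the two output variables (the apparent degree of calcination), using synthetic data in place of the roughly 6000 plant records; the models, process factors, and figures shown are assumptions for illustration, not the dissertation's actual six techniques or results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Hypothetical process factors standing in for kiln-feed chemistry, fuel rate,
# preheater temperatures, and similar plant variables.
X = rng.normal(size=(6000, 5))
# Hypothetical target: apparent degree of calcination, bounded between 0 and 1.
y = 1 / (1 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.2, size=6000))))

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "gradient_boosting": GradientBoostingRegressor(random_state=1),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: mean CV RMSE = {rmse:.4f}")
```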
The Root Mean Squared Error (RMSE) of various regression models was examined, and the models were then run to ascertain which independent variables in cement manufacturing had the largest impact on the dependent variables. To establish which independent variable had the biggest impact on CO2 emissions, the significance of the other factors was also assessed.Item Open Access Autonomous trucks as a scalable system of systems: development, constituent systems communication protocols and cybersecurity(Colorado State University. Libraries, 2024) Elhadeedy, Ahmed, author; Daily, Jeremy, advisor; Chong, Edwin, committee member; Papadopoulos, Christos, committee member; Luo, Jie, committee memberDriverless vehicles are complex to develop due to the number of systems required for safe and secure autonomous operation. Autonomous vehicles embody the definition of a system of systems as they incorporate several systems to enable functions like perception, decision-making, vehicle controls, and external communication. Constituent systems are often developed by different vendors globally which introduces challenges during the development process. Additionally, as the fleet of autonomous vehicles scales, optimization of onboard and off-board communication between the constituent systems becomes critical. Autonomous truck and trailer configurations face challenges when operating in reverse due to the lack of sensing on the trailer. It is anticipated that sensor packages will be installed on existing trailers to extend autonomous operations while operating in reverse in uncontrolled environments, like a customer's loading dock. Power Line Communication (PLC) between the trailer and the tractor cannot support high bandwidth and low latency communication. Legacy communications use powerline carrier communications at 9600 baud, so upfitting existing trailers for autonomous operations will require adopting technologies like Ethernet or a wireless harness between the truck and the trailer. This would require additional security measures and architecture, especially when pairing a tractor with a trailer. We proposed tailoring the system of systems Model for autonomous vehicles. The model serves as the governing framework for the development of constituent systems. It's essential for the SoS model to accommodate various development approaches that are used for hardware, and software such as Agile, or Vee models. Additionally, a queuing model for certificates authentication compares the named certificate approach with the traditional approach. The model shows the potential benefits of named certificates when the autonomous vehicles are scaled. We also proposed using named J1939 signals to reduce complexities and integration efforts when multiple on-board or off-board systems request vehicle signals. We discuss the current challenges and threats on autonomous truck-trailer communication when Ethernet or a wireless harness is used, and the impact on the Electronic Control Unit (ECU) lifecycle. In addition to using Named Data Networking (NDN) to secure in-vehicle and cloud communication. Named Data Networking can reduce the complexity of the security of the in-vehicle communication networks where it provides a networking solution with security by design.Item Open Access Optimization of water infrastructure design and management in multi-use river basins under a changing climate(Colorado State University. 
Libraries, 2024) Hunu, Kenneth D., author; Conrad, Steven, advisor; DePue, Michael, committee member; Grigg, Neil, committee member; Bradley, Thomas, committee member; Sharvelle, Sybil, committee memberTraditional approaches to the hydrologic design of water infrastructure assume that the climate is stationary, and that historical data reflect future conditions. The earth's climate is not stationary but changing with time. The traditional approach may, therefore, no longer be applicable. In addition to the issue of nonstationarity of climate, the design of water infrastructure to meet a particular need, such as water supply, is often assumed to be a single-objective optimization problem and is done without consideration of other competing watershed uses and constraints such as recreation, hydropower generation, environmental flows, and flood control. Such an approach routinely fails to adequately address the challenges of complex systems such as multi-use river basins that require an understanding of the linkages between the various uses and stakeholders. Water infrastructure design will benefit from a holistic and systems engineering approach that maximizes the value to all users while serving its primary function. The objective of this research was to identify and develop a new approach for designing and managing water infrastructure in multi-use basins that accounts for the effects of climate change by shifting the current static design paradigm to a more dynamic paradigm and accounts for other multi-use basin objectives, which may include recreation, hydropower generation, flood control, environmental flows, and water supply. This research involved an extensive literature review, exploration of concepts to solve the identified problems, data collection, and development of a decision support research tool that is formulated such that it can be used to test the viability of various hypotheses. This dissertation presents a practical approach for designing and managing water infrastructure that uses quantifiable hydrological estimates of the future climate and accounts for multiple river basin objectives from stakeholders. The approach is a hybrid approach that applies the updated flood frequency methodology for accounting for climate change and an adaptive management framework for managing uncertainty and multiple basin objectives. The adaptive management framework defines and maintains baseline objectives of existing climate stressors and basin users while designing the primary water infrastructure, in a manner that accounts for nonstationarity and uncertainty. The adaptive management approach allows for regular review and refinement of the application of climate data and adjustments to basin objectives, thereby reducing uncertainty within the data needed for decision-making. This new approach provides a cost-effective way to use climate change projections, is applicable to all basins and projects irrespective of geographic location, size, or basin uses, and has minimal subjective components thereby making it reproducible.Item Open Access Hybrid MBSE-DevOps model for implementation in very small enterprises(Colorado State University. 
Item Open Access Hybrid MBSE-DevOps model for implementation in very small enterprises (Colorado State University. Libraries, 2024) Simpson, Cailin R., author; Simske, Steven, advisor; Miller, Erika, committee member; Reisfeld, Brad, committee member; Sega, Ronald, committee member This work highlights the challenge of implementing digital engineering (DE) practices, specifically model-based systems engineering (MBSE) and DevOps, in very small entities (VSEs) that deliver software products. VSEs often face unique challenges due to their limited resources and project scale. Various organizations have authored strategies for DE advancement, such as the Department of Defense's Digital Engineering Strategy and INCOSE's Systems Engineering 2035, which highlight the need for improved DE practices across engineering fields. This work proposes a hybrid methodology named FlexOps, combining MBSE and DevOps, to address these challenges. The authors highlight the challenges faced by VSEs and emphasize that MBSE and DevOps adoption in VSEs requires careful consideration of factors such as cost, skill availability, and customer needs. The motivation for the research stems from the difficulties VSEs face in implementing processes designed for larger companies. The authors aim to provide a stepping stone for VSEs to adopt DE practices through the hybrid FlexOps methodology, leveraging existing MBSE and DevOps practices while accommodating smaller project scales. This work emphasizes that VSEs supporting government contracts must also adopt DE practices to meet industry directives. The implementation of FlexOps in two case studies highlights its benefits, such as offering a stepping stone to DE practices, combining Agile, MBSE, and DevOps strategies, and addressing VSE-specific challenges. The challenges VSEs face in adopting DE practices may be incrementally addressed through a hybrid method, FlexOps, which was designed to bridge the gap between traditional practices and DE for VSEs delivering software products.
Item Open Access An analysis of the costs and performance of vehicles fueled by alternative energy carriers (Colorado State University. Libraries, 2024) Lynch, Alexander, author; Bradley, Thomas, advisor; Coburn, Tim, committee member; Olsen, Daniel B., committee member The transportation sector stands at the crossroads of new challenges and opportunities, driven by the pressing need to mitigate environmental impacts, enhance energy efficiency, and ensure sustainable mobility solutions. This transition will occur across diverse transportation modes, each with distinct characteristics and challenges. From light-duty vehicles embracing electrification to maritime transport adopting alternative-fuel engines, the push for low-carbon technology is reshaping the landscape of transportation. In this context, it is necessary to conduct a review and assessment of the technologies, environmental benefits, and costs of alternative fuels and powertrains across a broad set of applications in the transportation sector. This study performs that assessment by combining bottom-up cost analysis, environmental assessment, and reviews of the literature to examine the techno-economic aspects of various fuel and powertrain options. The approach involves detailed evaluations of individual components and systems to model the cost structures and efficiency profiles of vehicles. The results illustrated in this thesis will be embedded into adoption models to enable governments, utilities, private fleets, and other stakeholders to make informed transportation planning decisions.
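As a simple illustration of what a bottom-up cost analysis looks like in practice, the sketch below rolls up hypothetical component-level costs for an assumed electric powertrain. The component list, unit costs, and sizes are placeholders, not values from the thesis.

```python
# Minimal sketch of a bottom-up powertrain cost roll-up with hypothetical inputs.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    unit_cost: float   # cost per unit of size (e.g., $/kWh or $/kW)
    size: float        # kWh, kW, or count, depending on the component

def powertrain_cost(components: list) -> float:
    """Sum component-level costs to estimate a total powertrain cost."""
    return sum(c.unit_cost * c.size for c in components)

# Hypothetical battery-electric truck powertrain
bev_truck = [
    Component("battery pack", unit_cost=120.0, size=400.0),     # $/kWh x kWh
    Component("electric motor", unit_cost=15.0, size=350.0),    # $/kW x kW
    Component("power electronics", unit_cost=8.0, size=350.0),  # $/kW x kW
]
print(f"Estimated powertrain cost: ${powertrain_cost(bev_truck):,.0f}")
```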
Item Embargo Performance of continuous emission monitoring systems at operating oil and gas facilities (Colorado State University. Libraries, 2024) Day, Rachel Elizabeth, author; Riddick, Stuart, advisor; Zimmerle, Daniel, advisor; Blanchard, Nathaniel, committee member; Marzolf, Greg, committee member Globally, the demand to reduce methane (CH4) emissions has become paramount, and the oil and natural gas (O&G) sector is highlighted as one of the main contributors, being the largest industrial emission source at ≈30%. In efforts to comply with CH4 emission-reduction legislation, O&G operators, emission-measurement solution companies, and researchers have been testing various techniques and technologies to accurately measure and quantify CH4 emissions. As recent changes to U.S. legislative policies in the Greenhouse Gas Reporting Program (GHGRP) and the Inflation Reduction Act (IRA) impose a methane waste emission charge beginning in 2024, O&G operators are looking for more continuous and efficient methods to effectively measure emissions at their facilities. Prior to these policy updates, bottom-up measurement methods, which rely on emission factors and emission-source activity data, were the main technique used for reporting yearly emissions to the GHGRP. Top-down measurement methods, such as flyovers with airplanes, drones, or satellites, can provide snapshot-in-time surveys of overall site emissions. Because prior research has shown variance between top-down and bottom-up emission estimates, O&G operators have become interested in continuous emission monitoring systems (CEMs) that track emission activity at their sites continually over time. One type of CEM, the continuous monitoring (CM) point sensor network (PSN), monitors methane emissions continuously with sensors mounted at the perimeter of O&G sites. CM PSN solutions have become appealing because they could potentially offer a relatively cost-effective and autonomous method of identifying sporadic and fugitive leaks. This study evaluated multiple commercially available CM PSN solutions under single-blind controlled-release testing conducted at operational upstream and midstream O&G sites. During releases, PSNs reported site-level emission rate estimates of 0 kg/h between 38% and 86% of the time. When non-zero site-level emission rate estimates were provided, no linear correlation between release rate and reported emission rate estimate was observed. On average, aggregated across all PSN solutions during releases, 5% of mixing ratio readings at downwind sensors were greater than the site's baseline plus two standard deviations. Four of the six PSN solutions tested during this field campaign provided site-level emission rate estimates, with site-average relative error ranging from -100% to 24% for solution D, -100% to -43% for solution E, -25% for solution F (deployed at only one site), and -99% to 430% for solution G, with an overall average of -29% across all sites and solutions. Of all the individual site-level emission rate estimates, only 11% were within ±2.5 kg/h of the study team's best estimate of site-level emissions at the time of the releases.
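For readers unfamiliar with the reported metrics, the short sketch below shows how a site-level relative error and the fraction of releases reported as 0 kg/h can be computed. The release rates and reported estimates are illustrative placeholders, not data from the study.

```python
# Minimal sketch of the relative-error and zero-report calculations described above.
def relative_error_pct(estimate_kg_h: float, reference_kg_h: float) -> float:
    """Relative error in percent: (estimate - reference) / reference * 100."""
    return (estimate_kg_h - reference_kg_h) / reference_kg_h * 100.0

# (controlled release rate, reported site-level estimate) pairs in kg/h -- hypothetical
releases = [(1.0, 0.0), (2.0, 1.4), (0.5, 2.6)]

errors = [relative_error_pct(est, ref) for ref, est in releases]  # a 0 report gives -100%
zero_fraction = sum(1 for _, est in releases if est == 0.0) / len(releases)

print(f"Relative errors (%): {[round(e) for e in errors]}")
print(f"Fraction of releases reported as 0 kg/h: {zero_fraction:.0%}")
```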
Item Open Access Advancing medium- and heavy-duty electric vehicle adoption models with novel natural language processing metrics (Colorado State University. Libraries, 2024) Ouren, Fletcher, author; Bradley, Thomas H., advisor; Coburn, Timothy, committee member; Windom, Bret, committee member The transportation sector must rapidly decarbonize to meet its emissions-reduction targets. Medium- and heavy-duty decarbonization is lagging behind the light-duty industry due to technical and operational challenges and the choices made by medium- and heavy-duty fleet operators. Research investigating the procurement considerations of fleets has relied heavily on interviews and surveys, but many of these studies suffer from low participation rates and are difficult to generalize. To model fleet operators' decision-making priorities, this thesis applies a robust text-analysis approach based on latent Dirichlet allocation (LDA) and Bidirectional Encoder Representations from Transformers (BERT) to two broad corpora of fleet-adoption literature from academia and industry. Based on a newly developed metric, this thesis finds that the academic corpus emphasizes the importance of suitability, familiarity, norms, and brand image. These perception rankings are then passed to an agent-based model to determine how differences in perception affect adoption predictions. The results forecast accelerated medium- and heavy-duty electric vehicle adoption when using the findings from the academic corpus rather than the industry corpus.
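As a rough sketch of the topic-modeling stage of such a text-analysis pipeline, the snippet below runs scikit-learn's latent Dirichlet allocation on a toy corpus. The documents are invented, and the BERT stage and the thesis's custom perception metric are not reproduced here.

```python
# Minimal sketch: LDA topic extraction on a toy corpus with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # invented stand-ins for fleet-adoption literature excerpts
    "fleet operators weigh purchase cost and charging infrastructure",
    "battery range and payload suitability drive truck procurement",
    "brand image and industry norms influence adoption decisions",
    "total cost of ownership and incentives shape fleet purchases",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)                 # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top_terms)}")
```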