
Theses and Dissertations

Recent Submissions

Now showing 1 - 20 of 96
  • Item (Open Access)
    Modeling energy systems using large data sets
    (Colorado State University. Libraries, 2024) Duggan, Gerald P., author; Young, Peter, advisor; Zimmerle, Daniel, advisor; Bradley, Thomas, committee member; Carter, Ellison, committee member
    Modeling and simulation are playing an increasingly important role in the sciences, and science is having a broader impact on policy definition at local, national, and global scales. It is therefore important that simulations which impact policy produce high-quality results. The veracity of these models depends on many factors, including the quality of input data, the verification process for the simulations, and how result data are transformed into conclusions. Input data often come from multiple sources, and it is difficult to create a single, verified data set. This dissertation describes the challenges in creating a research-quality, verified, and aggregated data set. It then offers solutions to these challenges and illustrates the process using three case studies of published modeling and simulation results from different application domains.
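The sketch below is not taken from the dissertation; it is a generic illustration of the kind of cross-source verification and aggregation step described above, assuming pandas is available (the site names, column names, and tolerance are invented):

```python
import pandas as pd

# Two hypothetical input sources reporting the same sites (names and values are illustrative only).
source_a = pd.DataFrame({"site": ["s1", "s2"], "demand_kwh": [1200.0, 980.0]})
source_b = pd.DataFrame({"site": ["s1", "s2"], "demand_kwh": [1215.0, 2400.0]})

merged = source_a.merge(source_b, on="site", suffixes=("_a", "_b"))

# Simple verification rules: a plausibility range and cross-source agreement within 10%.
merged["in_range"] = merged["demand_kwh_a"].between(0, 2000)
merged["sources_agree"] = (
    (merged["demand_kwh_a"] - merged["demand_kwh_b"]).abs() <= 0.10 * merged["demand_kwh_a"]
)

# Verified rows feed the aggregated research data set; the rest are routed to manual review.
ok = merged["in_range"] & merged["sources_agree"]
verified, flagged = merged[ok], merged[~ok]
print(verified[["site", "demand_kwh_a"]])
print(f"{len(flagged)} record(s) need manual review")
```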
  • Item (Open Access)
    Managing risk in commercial-off-the-shelf based space hardware systems
    (Colorado State University. Libraries, 2024) Herbert, Eric W., author; Bradley, Thomas, advisor; Sega, Ronald, advisor; Herber, Daniel, committee member; Shahroudi, Kamran, committee member; Wise, Daniel, committee member
    The space industry is experiencing a dynamic renaissance. From 2005 to 2021, the industry exhibited a 265% increase in commercial and government investment [1]. Demand is forecast to continue its upward trajectory, growing by an added 55% by 2026 [1]. The aerospace industry therefore continually seeks innovative space hardware solutions to reduce cost and to shorten orbit insertion schedules. Using Commercial-Off-the-Shelf (COTS) components to build space-grade hardware is one method that has been proposed to meet these goals. However, using non-space-grade COTS components requires designers to identify and manage risks differently early in the development stages. Once the risks are identified, sound and robust risk management efforts can be applied. The methods used must verify that the COTS are reliable, resilient, safe, and able to survive rigorous and damaging launch and space environments for the mission's required longevity, or that appropriate mitigation measures can be taken. This type of risk management practice must take into consideration form-fit-function requirements, mission objectives, size-weight-and-performance (SWaP) constraints, how the COTS will perform outside of their native applications, manufacturing variability, and lifetime expectations, albeit using a different lens than those traditionally used. To address the uncertainties associated with COTS, the space industry can employ a variety of techniques, such as performing in-depth component selections, optimizing designs, instituting robust stress screening, incorporating protective and preventative measures, or subjecting the hardware to various forms of testing to characterize its capabilities and limitations. However, industry-accepted guidance for accomplishing this does not reside in any standard or guide, despite space program policies encouraging COTS use. One reason is that companies do not wish to reveal the proprietary methods they use to evaluate COTS, which, if broadcast, could benefit their market competition. Another is that high-value spacecraft sponsors still cling to low-risk, time-consuming, and expensive techniques that require the use of space hardware built with parts that have historical performance pedigrees. Keeping this data hidden does not help the space industry, especially when there is a push to field space systems built with modern technologies at a faster rate. This is causing a change in basic assumptions as stakeholders begin to embrace parts from other industries, such as the automotive, aviation, and medical sectors, on a more frequent basis. No longer are COTS relegated to use in CubeSats or research and development spacecraft that have singular and limited missions and are expected to function for a brief period. COTS produced for terrestrial markets are equally dependable because of the optimized manufacturing and quality control techniques that reduce product variability. This is increasing the use of COTS parts in space hardware designs where, until recently, space programs had dared not tread. But using COTS does come with a unique set of uncertainties and risks that still need to be identified and mitigated. Despite legacy risk management tools being mature and regularly practiced across a diverse industrial field, there is no consensus on which risk management tools are best to use when evaluating COTS for space hardware applications.
However, contained within the technical literature amassed over the last twenty-plus years are significant systems engineering controls and enablers that can be used to develop robust COTS-use risk management frameworks. The controls and enablers become the basis for identifying where aleatory and epistemic uncertainties exist within a COTS-based space system hardware design. With these statements in mind, unique activities can be defined to analyze, evaluate, and mitigate the uncertainties and the inherent risks to an acceptable level, or to determine that a COTS-based design is not appropriate. These concepts were explored and developed in this research. Specifically, a series of COTS-centric risk management frameworks was developed that can be used as a roadmap when considering integrating COTS into space hardware designs. From these frameworks, risk evaluation processes were developed that identified the unique activities needed to effectively evaluate the non-space-grade parts being considered. The activities defined in these risk evaluation processes were tailored to uncover as much uncertainty as possible so that appropriate risk mitigation techniques could be applied, design decisions could be quickly made from an informed perspective, and spacecraft fielding could be accomplished at an accelerated rate. Instead of taking five to ten years to field a spacecraft, it can now take as little as one to three years. Thus, if effectively used, COTS integration can be a force multiplier throughout the space industry. But first, the best practices learned over the last few decades must be collected, synthesized, documented, and applied. To validate the risk frameworks discussed, a COTS-based space-grade secondary lithium-ion battery was chosen to demonstrate that the concepts could work. Unique risk evaluation activities were developed that took into consideration the spacecraft's mission, environment, application, and lifetime (MEAL) [2] attributes to characterize the battery's COTS cells, printed circuit board, electrical design, and electrical-electronic-electromechanical (EEE) performance, strengths, and weaknesses. The activities defined and executed included a variety of modeling, analyses, non-destructive examinations, destructive physical assessments, environmental testing, worst-case-scenario testing, and manufacturing assessments. These activities were developed based on the enablers and controls extracted from the reviewed literature. The techniques employed proved quite successful in uncovering and mitigating numerous aleatory and epistemic uncertainties, and the mitigation of these uncertainties significantly improved the battery's design and performance. As a result, the COTS-based battery was successfully built, qualified, and flown on a fleet of launch vehicles and payloads. The information that follows documents how the risk management frameworks were created, what influenced their architecture, and how they were successfully validated. Validating the COTS-centric risk management framework was important because it demonstrated the frameworks' utility in uncovering uncertainty. It also proved that readily employable methods exist that are not typically within the scope of traditional space hardware design and qualification techniques.
This is important because it provides the industry with a new set of systems engineering tools that can be employed to limit the impact of supply chain constraints, reduce reliance on expensive, low-yield hardware procurement practices, and minimize the amount of obsolete hardware in designs, which tends to constrain the space system hardware's performance. As a result, the techniques developed in this research start to fill a gap in the space industry's systems engineering toolbox.
  • Item (Open Access)
    Structural health monitoring in adhesively bonded composite joints
    (Colorado State University. Libraries, 2024) Caldwell, Steven, author; Radford, Donald W., advisor; Simske, Steven, committee member; Cale, James, committee member; Adams, Henry, committee member
    Composite bonded aircraft structure is a prevalent portion of today's aircraft structural composition. Adequate bond integrity is a critical aspect of the fabrication and operational service life of aircraft structure. Many of these structural bonds are critical for flight safety. Thus, a major concern is the assurance of quality in the structural bond. Over the last decade, non-destructive bond evaluation techniques have improved but still cannot detect a structurally weak bond that exhibits full adherend/adhesive contact. Currently, expensive and time-consuming structural proof testing is required to verify bond integrity. The objective of this work is to investigate the feasibility of bondline integrity monitoring via piezoelectric sensors embedded in the composite joint. Initially, a complex composite joint, the Pi preform, was analytically evaluated for health monitoring viability, with the results showing promising capability. Subsequently, due to experimental complexities, a simple, state-of-the-art composite single lap shear joint was selected for experimentation and analysis to measure and quantify the effect of incorporating a sensor within the bondline and to evaluate and expand on the ability of the embedded sensor to monitor and assess joint integrity. Simple flatwise tension joints are also studied to investigate an orthogonal loading direction through the sensor. The experimental results indicate that the embedded piezoelectric sensors can measure a change in the joint before its integrity degrades and fails on subsequent loadings, resulting in a novel approach for prognostic performance evaluation that does not detrimentally affect the performance of the structural joint.
  • Item (Open Access)
    Raw material optimization and CO₂ sensitivity-predictive analytics in cement manufacturing: a case study at Union Bridge Plant, Heidelberg Materials, Maryland
    (Colorado State University. Libraries, 2024) Boakye, Kwaku, author; Simske, Steve, advisor; Bradley, Tom, committee member; Troxell, Wade, committee member; Goemans, Chris, committee member
    Cement has been in use by humans throughout history, and its manufacturing process has undergone many changes. The high increase in economic growth around the world and the demand for rapid infrastructure development due to population growth are the underlying reasons for the globally high cement demand. Cement is produced by grinding clinker together with gypsum. The clinker is produced using a rotary kiln, which burns a mixture of limestone, clay, magnesium, silica, and iron with desired atomic percentages through the calcination process. The quarry serves as the main source of raw material for the rotary kiln in cement production. Over the years, cement manufacturing has harmed environmental, social, and political aspects of society. These impacts include the overuse of raw material obtained by mining, resulting in disturbed landmass; the overproduction of rock waste material; and the emission of CO2 from the calcination of limestone in the pyro process. The study looks at three cement manufacturing systems and uses different methodologies to achieve results that can be implemented in the cement industry. These three systems were (1) the quarry, (2) the preheat tower, and (3) the kiln. Ensuring the consistency of material feed chemistry, with the quarry playing a pivotal role, is essential for optimizing the performance of a rotary kiln. Optimizing the raw material also limits the quantity of raw materials used for cement manufacturing, cutting down waste. The study employed a six-step methodology, incorporating a modified 3D mining software modeling tool, a database computer loop prediction tool, and other resources to enhance mining sequencing, optimize raw material utilization, and ensure a consistent chemistry mix for the kiln. By using overburden as a raw material in the mix, the quarry greatly reduced the environmental impact of squandering unwanted material. This has a significant environmental benefit, since it requires less space to manage the overburden waste generated during mining. In addition, raw material usage was optimized for clinker production, causing a 4% reduction in sand usage as raw material, a reduction in raw material purchase cost, a reduction in the variability of kiln feed chemistry, and the production of high-quality clinker. The standard deviation of kiln feed LSF experienced a 45 percent improvement, leading to a 65 percent reduction in the variability of kiln feed. The study also uses machine learning methods to model different stages of the calcination process in cement and to improve knowledge of the generation of CO2 during cement manufacturing. Calcination plays a crucial role in assessing clinker quality, energy requirements, and CO2 emissions within a cement-producing facility. However, due to the complexity of the calcination process, accurately predicting CO2 emissions has historically been challenging. The objective of this study is to establish a direct relationship between CO2 generation during the raw material manufacturing process and various process factors. In this study, six machine-learning techniques are explored to analyze two output variables: (1) the apparent degree of oxidation, and (2) the apparent degree of calcination. Sensitivity analysis of CO2 molecular composition (on a dry basis) utilizes over 6000 historical manufacturing health data points as input variables, and the findings are utilized to train the algorithms.
The Root Mean Squared Error (RMSE) of various regression models was examined, and the models were then run to ascertain which independent variables in cement manufacturing had the largest impact on the dependent variables. To establish which independent variable had the biggest impact on CO2 emissions, the significance of the other factors was also assessed.
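As a generic illustration of the RMSE comparison described above (not the study's data or models), a minimal scikit-learn sketch that fits two regression models on placeholder process data and reports their test RMSE might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Placeholder stand-ins for process inputs and a target such as the apparent degree
# of calcination; the actual study used over 6000 historical plant records.
X = rng.normal(size=(500, 5))
y = X @ np.array([0.8, -0.3, 0.5, 0.0, 0.1]) + rng.normal(scale=0.2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```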
  • Item (Open Access)
    Autonomous trucks as a scalable system of systems: development, constituent systems communication protocols and cybersecurity
    (Colorado State University. Libraries, 2024) Elhadeedy, Ahmed, author; Daily, Jeremy, advisor; Chong, Edwin, committee member; Papadopoulos, Christos, committee member; Luo, Jie, committee member
    Driverless vehicles are complex to develop due to the number of systems required for safe and secure autonomous operation. Autonomous vehicles embody the definition of a system of systems, as they incorporate several systems to enable functions like perception, decision-making, vehicle controls, and external communication. Constituent systems are often developed by different vendors globally, which introduces challenges during the development process. Additionally, as the fleet of autonomous vehicles scales, optimization of onboard and off-board communication between the constituent systems becomes critical. Autonomous truck and trailer configurations face challenges when operating in reverse due to the lack of sensing on the trailer. It is anticipated that sensor packages will be installed on existing trailers to extend autonomous operations while operating in reverse in uncontrolled environments, like a customer's loading dock. Power Line Communication (PLC) between the trailer and the tractor cannot support high-bandwidth, low-latency communication. Legacy communications use powerline carrier communications at 9600 baud, so upfitting existing trailers for autonomous operations will require adopting technologies like Ethernet or a wireless harness between the truck and the trailer. This would require additional security measures and architecture, especially when pairing a tractor with a trailer. We proposed tailoring the system of systems (SoS) model for autonomous vehicles. The model serves as the governing framework for the development of constituent systems. It is essential for the SoS model to accommodate the various development approaches used for hardware and software, such as the Agile or Vee models. Additionally, a queuing model for certificate authentication compares the named-certificate approach with the traditional approach. The model shows the potential benefits of named certificates when the autonomous vehicle fleet is scaled. We also proposed using named J1939 signals to reduce complexities and integration efforts when multiple on-board or off-board systems request vehicle signals. We discuss the current challenges and threats to autonomous truck-trailer communication when Ethernet or a wireless harness is used, and the impact on the Electronic Control Unit (ECU) lifecycle, in addition to using Named Data Networking (NDN) to secure in-vehicle and cloud communication. Named Data Networking can reduce the complexity of securing in-vehicle communication networks, as it provides a networking solution with security by design.
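The queuing analysis itself is not reproduced in the abstract; purely as an illustrative sketch of how a simple M/M/1 queue could be used to compare a faster named-certificate authentication path against a slower traditional one (all rates are invented, and the dissertation's actual model may differ):

```python
def mm1_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Average time a request spends in an M/M/1 queue (waiting plus service), in seconds."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Illustrative rates in requests per second; not taken from the dissertation.
arrivals = 40.0
service_rates = {
    "traditional certificate check": 50.0,
    "named certificate check": 80.0,
}
for label, mu in service_rates.items():
    print(f"{label}: mean time in system = {1000 * mm1_time_in_system(arrivals, mu):.1f} ms")
```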
  • Item (Open Access)
    Optimization of water infrastructure design and management in multi-use river basins under a changing climate
    (Colorado State University. Libraries, 2024) Hunu, Kenneth D., author; Conrad, Steven, advisor; DePue, Michael, committee member; Grigg, Neil, committee member; Bradley, Thomas, committee member; Sharvelle, Sybil, committee member
    Traditional approaches to the hydrologic design of water infrastructure assume that the climate is stationary and that historical data reflect future conditions. The earth's climate, however, is not stationary but changing with time. The traditional approach may, therefore, no longer be applicable. In addition to the issue of nonstationarity of climate, the design of water infrastructure to meet a particular need, such as water supply, is often assumed to be a single-objective optimization problem and is done without consideration of other competing watershed uses and constraints such as recreation, hydropower generation, environmental flows, and flood control. Such an approach routinely fails to adequately address the challenges of complex systems such as multi-use river basins that require an understanding of the linkages between the various uses and stakeholders. Water infrastructure design will benefit from a holistic and systems engineering approach that maximizes the value to all users while serving its primary function. The objective of this research was to identify and develop a new approach for designing and managing water infrastructure in multi-use basins that accounts for the effects of climate change by shifting the current static design paradigm to a more dynamic one, and that accounts for other multi-use basin objectives, which may include recreation, hydropower generation, flood control, environmental flows, and water supply. This research involved an extensive literature review, exploration of concepts to solve the identified problems, data collection, and development of a decision support research tool that is formulated such that it can be used to test the viability of various hypotheses. This dissertation presents a practical approach for designing and managing water infrastructure that uses quantifiable hydrological estimates of the future climate and accounts for multiple river basin objectives from stakeholders. The approach is a hybrid one that applies the updated flood frequency methodology to account for climate change and an adaptive management framework to manage uncertainty and multiple basin objectives. The adaptive management framework defines and maintains baseline objectives of existing climate stressors and basin users while designing the primary water infrastructure, in a manner that accounts for nonstationarity and uncertainty. The adaptive management approach allows for regular review and refinement of the application of climate data and adjustments to basin objectives, thereby reducing uncertainty within the data needed for decision-making. This new approach provides a cost-effective way to use climate change projections, is applicable to all basins and projects irrespective of geographic location, size, or basin uses, and has minimal subjective components, thereby making it reproducible.
  • Item (Open Access)
    Hybrid MBSE-DevOps model for implementation in very small enterprises
    (Colorado State University. Libraries, 2024) Simpson, Cailin R., author; Simske, Steven, advisor; Miller, Erika, committee member; Reisfeld, Brad, committee member; Sega, Ronald, committee member
    This work highlights the challenge of implementing digital engineering (DE) practices, specifically model-based systems engineering (MBSE) and DevOps, in very small entities (VSEs) that deliver software products. VSEs often face unique challenges due to their limited resources and project scale. Various organizations have authored strategies for DE advancement, such as the Department of Defense's Digital Engineering Strategy and INCOSE's Systems Engineering 2035, which highlight the need for improved DE practices across the engineering fields. This work proposes a hybrid methodology named FlexOps, combining MBSE and DevOps, to address these challenges. The authors highlight the challenges faced by VSEs and emphasize that MBSE and DevOps adoption in VSEs requires careful consideration of factors like cost, skill availability, and customer needs. The motivation for the research stems from the difficulties faced by VSEs in implementing processes designed for larger companies. The authors aim to provide a stepping stone for VSEs to adopt DE practices through the hybrid FlexOps methodology, leveraging existing MBSE and DevOps practices while accommodating smaller project scales. This work emphasizes that VSEs supporting government contracts must also adopt DE practices to meet industry directives. The implementation of FlexOps in two case studies highlights its benefits, such as offering a stepping stone to DE practices, combining Agile, MBSE, and DevOps strategies, and addressing VSE-specific challenges. The challenges faced by VSEs in adopting DE practices may be incrementally improved by adopting a hybrid method: FlexOps. FlexOps was designed to bridge the gap between traditional practices and DE for VSEs delivering software products.
  • Item (Open Access)
    An analysis of the costs and performance of vehicles fueled by alternative energy carriers
    (Colorado State University. Libraries, 2024) Lynch, Alexander, author; Bradley, Thomas, advisor; Coburn, Tim, committee member; Olsen, Daniel B., committee member
    The transportation sector stands at the crossroads of new challenges and opportunities, driven by the pressing need to mitigate environmental impacts, enhance energy efficiency, and ensure sustainable mobility solutions. This transition will occur across diverse transportation modes, each with distinct characteristics and challenges. From light-duty vehicles embracing electrification to maritime transport adopting alternative fuel engines, the push for low-carbon technology is reshaping the landscape of transportation. In this context, it is necessary to conduct a review and assessment of the technologies, environmental benefits, and costs of alternative fuels and powertrains across a broad set of applications in the transportation sector. This study performs this assessment by combining bottom-up cost analysis, environmental assessments, and reviews of the literature to examine the techno-economic aspects of various fuel and powertrain options in the transportation sector. This approach involves detailed evaluations of individual components and systems to model the cost structures and efficiency profiles of vehicles. The results illustrated in this thesis will be embedded into adoption models to enable governments, utilities, private fleets, and other stakeholders to make informed transportation planning decisions.
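As a hedged illustration of what a bottom-up vehicle cost build-up can look like (component names, prices, and the markup factor are invented, not the thesis's data):

```python
# Hypothetical component costs in USD for a heavy-duty tractor (invented for illustration).
bev_components = {
    "battery_pack": 100_000,      # assumed pack size times an assumed $/kWh
    "electric_drive": 15_000,
    "power_electronics": 8_000,
    "glider": 90_000,             # chassis, cab, and parts shared across powertrains
}
diesel_components = {
    "engine_and_aftertreatment": 35_000,
    "transmission": 10_000,
    "fuel_system": 5_000,
    "glider": 90_000,
}

def vehicle_cost(components, markup=1.2):
    """Bottom-up vehicle cost: component sum times an assumed integration/overhead markup."""
    return markup * sum(components.values())

for label, parts in [("battery-electric", bev_components), ("diesel", diesel_components)]:
    print(f"{label}: ${vehicle_cost(parts):,.0f}")
```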
  • Item (Open Access)
    Advancing medium- and heavy-duty electric vehicle adoption models with novel natural language processing metrics
    (Colorado State University. Libraries, 2024) Ouren, Fletcher, author; Bradley, Thomas H., advisor; Coburn, Timothy, committee member; Windom, Bret, committee member
    The transportation sector must rapidly decarbonize to meet its emissions reduction targets. Medium- and heavy-duty decarbonization is lagging behind the light-duty industry due to technical and operational challenges and the choices made by medium- and heavy-duty fleet operators. Research investigating the procurement considerations of fleets has relied heavily on interviews and surveys, but many of these studies suffer from low participation rates and are difficult to generalize. To model fleet operators' decision-making priorities, this thesis applies a robust text analysis approach based on latent Dirichlet allocation and Bidirectional Encoder Representations from Transformers to two broad corpora of fleet adoption literature from academia and industry. Based on a newly developed metric, this thesis finds that the academic corpus emphasizes the importance of suitability, familiarity, norms, and brand image. These perception rankings are then passed to an agent-based model to determine how differences in perception affect adoption predictions. The results show a forecast of accelerated medium- and heavy-duty electric vehicle adoption when using the findings from the academic corpus versus the industry corpus.
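As an illustration of the topic modeling ingredient mentioned above, a minimal latent Dirichlet allocation sketch over two tiny stand-in corpora (the documents and topic count are invented; the thesis's corpora, preprocessing, and BERT-based steps are not reproduced here):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in corpora; the thesis analyzed broad academic and industry literature sets.
academic_docs = [
    "total cost of ownership and charging infrastructure for electric trucks",
    "fleet operator survey on range suitability and charging downtime",
]
industry_docs = [
    "brand image and dealer familiarity drive truck procurement decisions",
    "operational norms and duty cycles limit battery electric adoption",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(academic_docs + industry_docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top_terms)}")
print("per-document topic weights:")
print(doc_topics.round(2))
```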
  • Item (Embargo)
    Performance of continuous emission monitoring systems at operating oil and gas facilities
    (Colorado State University. Libraries, 2024) Day, Rachel Elizabeth, author; Riddick, Stuart, advisor; Zimmerle, Daniel, advisor; Blanchard, Nathaniel, committee member; Marzolf, Greg, committee member
    Globally, demand to reduce methane (CH4) emissions has become paramount, and the oil and natural gas (O&G) sector is highlighted as one of the main contributors, being the largest industrial emission source at ≈30%. In efforts to comply with legislation on CH4 emission reductions, O&G operators, emission measurement solution companies, and researchers have been testing various techniques and technologies to accurately measure and quantify CH4 emissions. As recent changes to U.S. legislative policies in the Greenhouse Gas Reporting Program (GHGRP) and Inflation Reduction Act (IRA) impose a methane waste emission charge beginning in 2024, O&G operators are looking for more continuous and efficient methods to effectively measure emissions at their facilities. Prior to these policy updates, bottom-up measurement methods, which involve emission factors and emission source activity data, were the main technique used for reporting yearly emissions to the GHGRP. Top-down measurement methods, such as fly-overs with airplanes, drones, or satellites, can provide snapshot-in-time surveys of overall site emissions. With prior research showing the variance between top-down and bottom-up emission estimates, O&G operators have become interested in continuous emissions monitoring systems (CEMs) for their sites to see emission activity continually over time. A type of CEM, the continuous monitoring (CM) point sensor network (PSN), monitors methane emissions continuously with sensors mounted at the perimeter of O&G sites. CM PSN solutions have become appealing, as they could potentially offer a relatively cost-effective and autonomous method of identifying sporadic and fugitive leaks. This study evaluated multiple commercially available CM PSN solutions under single-blind controlled release testing conducted at operational upstream and midstream O&G sites. During releases, PSNs reported site-level emission rate estimates of 0 kg/h between 38% and 86% of the time. When non-zero site-level emission rate estimates were provided, no linear correlation between release rate and reported emission rate estimate was observed. Aggregated across all PSN solutions during releases, an average of 5% of mixing ratio readings at downwind sensors were greater than the site's baseline plus two standard deviations. Four of the six PSN solutions tested during this field campaign provided site-level emission rate estimates, with site-average relative error ranging from -100% to 24% for solution D, -100% to -43% for solution E, -25% for solution F (deployed at only one site), and -99% to 430% for solution G, with an overall average of -29% across all sites and solutions. Of all the individual site-level emission rate estimates, only 11% were within ±2.5 kg/h of the study team's best estimate of site-level emissions at the time of the releases.
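As a small illustration of the error metrics reported above, the sketch below computes a site-average relative error and the fraction of estimates within ±2.5 kg/h from invented release and reported rates (not the study's measurements):

```python
import numpy as np

# Invented example: metered release rates and one PSN solution's reported site-level
# estimates, both in kg CH4 per hour (not the study's measurements).
true_rate = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
reported = np.array([0.0, 0.4, 2.6, 3.1, 9.5])

relative_error = (reported - true_rate) / true_rate        # per-release relative error
site_average_error = relative_error.mean()

within_band = np.abs(reported - true_rate) <= 2.5          # +/- 2.5 kg/h criterion
fraction_within = within_band.mean()

print(f"site-average relative error: {site_average_error:+.0%}")
print(f"estimates within +/-2.5 kg/h: {fraction_within:.0%}")
```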
  • Item (Open Access)
    The application of model-based systems engineering to understand security of systems using SAE J1939
    (Colorado State University. Libraries, 2024) Salinger, Gabe, author; Daily, Jeremy, advisor; Herber, Daniel, committee member; Windom, Bret, committee member
    The engineering community is adopting a Digital Engineering approach enabled by Model-Based Systems Engineering (MBSE) as an effective tool for designing complex systems. As technology continues to rapidly advance, security risk mitigation and requirements engineering are becoming prominent and important factors in the cybersecurity domain. As a result, engineering methods and frameworks must constantly be improved and updated to support the successful realization of cyber-physical systems (CPS). With the inherent connectivity, accessibility, and lack of security making CPSs attractive targets for cyber attacks, integrating security considerations into system development is crucial. With 'security by design' being a fundamental pillar of system development, MBSE plays a pivotal role in shaping secure system architectures. In this thesis, I explore the application of MBSE to the system security domain, focusing on secure system development and the incorporation of security by design throughout the system development phase. This is accomplished by investigating the utility of MBSE in understanding the vulnerabilities of a Medium to Heavy Duty (MHD) vehicle, improving its security posture, and providing recommendations on how to improve the process. This is achieved by first exploring the utility of simulation using model-based tools to better understand complex systems and bridge the gap between bottom-up and top-down approaches. Next, an established method, MBSEsec, is applied to the system of interest (SOI) to develop security controls for the vehicle's transport protocol. Additionally, recommendations are provided for improving the method's effectiveness in documenting vulnerabilities and risk. MBSEsec is a security-focused MBSE method using SysML to develop a system architecture that highlights security design considerations. The method's structured workflow facilitates the elicitation of security requirements and controls using specific systems modeling activities. The primary focus is on the heavy vehicle network transport protocol, J1939, serving as the SOI. The discovery and validation of new exploits that take advantage of vulnerabilities in the data-link layer of the protocol highlight the need to elicit better security requirements for cyber-physical systems. Using the J1939 network as the SOI for this work allows the models to be supported by and validated with on-vehicle testing. This work contributes a survey of modeling approaches for secure system design. Lastly, this thesis details the development of a novel approach for system-level, mission-focused security goal elicitation. EGRESS (Eliciting Goals for Requirement Engineering of Secure Systems) incorporates best practices from security requirement engineering works and utilizes Model-Based Systems Engineering to formulate security goals for cyber-physical systems, aiming to create more comprehensive security requirements.
  • Item (Open Access)
    Development and quasi-experimental study of the Scrum model-based system architecture process (sMBSAP) for agile model-based software engineering
    (Colorado State University. Libraries, 2023) Huss, Moe, author; Herber, Daniel R., advisor; Borky, John M., advisor; Miller, Erika, committee member; Mallette, Paul, committee member
    Model-Based Systems Engineering (MBSE) is an architecture-based software development approach. Agile, on the other hand, is a lightweight system development approach that originated in software development. To bring together the benefits of both approaches, this research is divided into two stages. The first stage proposes an integrated Agile MBSE approach that adopts a specific instance of the Agile approach (i.e., Scrum) in combination with a specific instance of an MBSE approach (i.e., the Model-Based System Architecture Process, "MBSAP") to create an Agile MBSE approach called the integrated Scrum Model Based System Architecture Process (sMBSAP). The proposed approach was validated through an experimental study that developed a health technology system over one year, successfully producing the desired software product. This work focuses on determining whether the proposed sMBSAP approach can deliver the desired Product Increments with the support of an MBSE process. The interaction of the Product Development Team with the MBSE tool, the generation of the system model, and the delivery of the Product Increments were observed. The results showed that the proposed approach contributed to achieving the desired system development outcomes and, at the same time, generated complete system architecture artifacts that would not have been developed if Agile had been used alone. Therefore, the first contribution of this stage lies in introducing a practical and operational method for merging Agile and MBSE. In parallel, the results suggest that sMBSAP is a middle ground that is more aligned with federal and state regulations, as it addresses technical debt concerns. The second stage of this research compares Reliability of Estimation, Productivity, and Defect Rate metrics for sprints driven by Scrum versus sMBSAP through the experimental study in stage 1. The quasi-experimental study conducted ten sprints using each approach. The approaches were then evaluated based on their effectiveness in helping the Product Development Team estimate the backlog items they can build during a time-boxed sprint and deliver more Product Backlog Items (PBI) with fewer defects. The Commitment Reliability (CR) was calculated to compare the Reliability of Estimation, with a measured average Scrum-driven value of 0.81 versus a statistically different average sMBSAP-driven value of 0.94. Similarly, the average Sprint Velocity (SV) for the Scrum-driven sprints was 26.8 versus 31.8 for the sMBSAP-driven sprints. The average Defect Density (DD) for Scrum-driven sprints was 0.91, while that of sMBSAP-driven sprints was 0.63. The average Defect Leakage (DL) for Scrum-driven sprints was 0.20, while that of sMBSAP-driven sprints was 0.15. The t-test analysis concluded that the sMBSAP-driven sprints were associated with a statistically significantly larger mean CR and SV, and a smaller mean DD and DL, than the Scrum-driven sprints. The overall results demonstrate formal quantitative benefits of an Agile MBSE approach compared to Agile alone, strengthening the case for considering Agile MBSE methods within the software development community. Future work might include comparing Agile and Agile MBSE methods using alternative research designs and further software development objectives, techniques, and metrics. Future investigations may also test sMBSAP with non-software systems to validate the methodology across other disciplines.
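As an illustration of the kind of per-sprint comparison described above, a minimal sketch that runs a two-sample t-test on invented Commitment Reliability values for ten sprints per approach (the numbers are invented for illustration and are not the study's data):

```python
import numpy as np
from scipy import stats

# Invented Commitment Reliability values (delivered / committed points) for ten sprints
# per approach; illustrative only, not the study's data.
scrum_cr = np.array([0.78, 0.85, 0.80, 0.83, 0.79, 0.82, 0.81, 0.80, 0.84, 0.78])
smbsap_cr = np.array([0.92, 0.95, 0.93, 0.96, 0.94, 0.93, 0.95, 0.92, 0.96, 0.94])

t_stat, p_value = stats.ttest_ind(smbsap_cr, scrum_cr, equal_var=False)  # Welch's t-test
print(f"mean CR: Scrum = {scrum_cr.mean():.2f}, sMBSAP = {smbsap_cr.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```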
  • Item (Open Access)
    Optimizing designer cognition relative to generative design methods
    (Colorado State University. Libraries, 2023) Botyarov, Michael, author; Miller, Erika, advisor; Bradley, Thomas, committee member; Forrest, Jeffrey, committee member; Moraes, Marcia, committee member; Simske, Steve, committee member; Radford, Donald, committee member
    Generative design is a powerful tool for design creation, particularly for complex engineering problems where a plethora of potential design solutions exists. Generative design systems explore the entire solution envelope and present the designer with multiple design alternatives that satisfy specified requirements. Although generative design systems present design solutions to an engineering problem, these systems lack consideration for the human element of the design system. Human cognition, particularly cognitive workload, can be hindered when the designer is presented with unparsed generative design system output, thereby reducing the efficiency of the systems engineering life cycle. Therefore, the objective of this dissertation was to develop a structured approach to produce an optimized parsing of spatially different generative design solutions, derived from generative design systems, such that human cognitive performance during the design process is improved. Foundational work on generative design usability was conducted to further elaborate on gaps found in the literature concerning the human component of generative design systems. A generative design application was then created for the purpose of evaluating the research objective. A novel generative design solution space parsing method that leverages the Gower distance matrix and the partitioning around medoids (PAM) clustering method was developed and implemented in the generative design application to structurally parse the generative design solution space for the study. The application and associated parsing method were then used by 49 study participants to evaluate performance, workload, and experience during a generative design selection process, given manipulation of both the quantity of designs in the generative design solution space and filtering of parsed subsets of design alternatives. Study data suggest that cognitive workload is lowest when 10 to 100 generative design alternatives are presented for evaluation in the subset of the overall design solution space. However, subjective data indicate caution when limiting the subset of designs presented, since design selection confidence and satisfaction may decrease as the design alternative selection becomes more limited. Given these subjective considerations, it is recommended that a generative design solution space consist of 50 to 100 design alternatives, parsed with the proposed clustering method that considers all design alternative variables.
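A minimal sketch of the Gower-plus-PAM parsing idea, assuming the third-party gower and scikit-learn-extra packages are available (the design attributes and cluster count are invented; this illustrates the general technique, not the dissertation's implementation):

```python
import pandas as pd
import gower                                  # third-party package: pip install gower
from sklearn_extra.cluster import KMedoids    # third-party package: scikit-learn-extra

# Invented mixed-type design alternatives standing in for a generative design solution space.
designs = pd.DataFrame({
    "mass_kg": [1.2, 1.3, 2.8, 2.9, 1.25],
    "cost_usd": [40, 42, 75, 80, 41],
    "material": ["aluminum", "aluminum", "steel", "steel", "aluminum"],
})

# Gower distance handles numeric and categorical variables in one matrix.
distance = gower.gower_matrix(designs)

# Partitioning around medoids on the precomputed distances; each medoid is a representative
# design that can be shown to the designer instead of the full solution space.
pam = KMedoids(n_clusters=2, metric="precomputed", random_state=0).fit(distance)
print("cluster labels:", pam.labels_)
print("representative designs:")
print(designs.iloc[pam.medoid_indices_])
```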
  • Item (Open Access)
    Avoiding technical bankruptcy in system development: a process to reduce the risk of accumulating technical debt
    (Colorado State University. Libraries, 2023) Kleinwaks, Howard, author; Bradley, Thomas, advisor; Batchelor, Ann, advisor; Marzolf, Gregory, committee member; Wise, Daniel, committee member; Turner, John F., committee member
    The decisions made early in system development can have profound impacts on later capabilities of the system. In iterative systems development, decisions made in each iteration produce impacts on every future iteration. Decisions that have benefits in the short term may damage the long-term health of the system. This phenomenon is known as technical debt. If not carefully managed, the buildup of technical debt within a system can lead to technical bankruptcy: the state where the system development can no longer proceed with its lifecycle without first paying back some of the technical debt. Within the schedule-constrained development paradigm of iteratively and incrementally developed systems, it is especially important to proactively manage technical debt and to understand the potential long-term implications of decisions made to achieve short-term delivery goals. To enable proactive management of technical debt within systems engineering, it is first necessary to understand the state of the art with respect to the application of technical debt methods and terminology within the field. While the technical debt metaphor is well known within the software engineering community, it is not as well known within the systems engineering community. Therefore, this research first characterizes the state of technical debt research within systems engineering through a literature review. Next, the prevalence of the technical debt metaphor among practicing systems engineers is established through an empirical survey. Finally, a common ontology for technical debt within systems engineering is proposed to enable clear and concise communication about the common problems faced in different systems engineering development programs. Using the research on technical debt in systems engineering and the ontology, this research develops a proactive approach to managing technical debt in iterative systems development by creating a decision support system called List, Evaluate, Achieve, Procure (LEAP). The LEAP process, when used in conjunction with release planning methods, can identify the potential for technical debt accumulation and, eventually, technical bankruptcy. The LEAP process is developed in two phases: a qualitative approach that provides initial assessments of the state of the system and a quantitative approach that models the effects of technical debt on system development schedules and the potential for technical bankruptcy based on release planning schedules. Example applications of the LEAP process are provided, consisting of the development of a conceptual problem and real applications of the process at the Space Development Agency. The LEAP process provides a novel and mathematical linkage of the temporal and functional dependencies of system development with the stakeholder needs, enabling proactive assessments of the ability of the system to satisfy those stakeholder needs. These assessments enable early identification of potential technical debt, reducing the risk of negative long-term impacts on the system health.
  • Item (Open Access)
    Systems engineering assessment and experimental evaluation of quality paradigms in high-mix low-volume manufacturing environments
    (Colorado State University. Libraries, 2023) Normand, Amanda, author; Bradley, Thomas, advisor; Miller, Erika, committee member; Vans, Marie, committee member; Zhao, Jianguo, committee member; Sullivan, Shane, committee member
    This research aimed to evaluate the effectiveness of industrial paradigm application in high-mix low-volume (HMLV) manufacturing environments using a systems engineering approach. An analysis of existing industrial paradigms was conducted and then compared to a needs analysis for a specific HMLV manufacturer. Several experiments, inspired by the paradigms, were selected for experimental evaluation in a real-world HMLV manufacturing setting. The results of this research showed that a holistic approach to paradigm application is essential for achieving optimal performance, based on cost advantage, throughput, and flexibility, in the HMLV manufacturing environment. The findings of this research study provide insights into the importance of considering the entire manufacturing system, including both technical and human factors, when evaluating the effectiveness of industrial paradigms. Additionally, this research highlights the importance of considering the unique characteristics of HMLV manufacturing environments, such as the high degree of variability and frequent changes in product mix, in designing manufacturing systems. Overall, this research demonstrates the value of a systems engineering approach in evaluating and implementing industrial paradigms in HMLV manufacturing environments. The results of this research provide a foundation for future research in this field and can be used to guide organizations in making informed decisions about production management practices in HMLV manufacturing environments.
  • Item (Open Access)
    The next generation space suit: a case study of the systems engineering challenges in space suit development
    (Colorado State University. Libraries, 2023) Cabrera, Michael A., author; Simske, Steve, advisor; Marzolf, Greg, committee member; Miller, Erika, committee member; Delgado, Maria, committee member
    The objective for a NASA contractor, the performing organization in this case study, is to develop and deliver the next-generation space suit to NASA, the customer in this case study, against a radically different level of customer expectation from previous years. In 2019, the administration proposed a return to the moon, thus transforming the system context of the current next-generation space suit and pushing schedule expectations forward two years. This dissertation serves as a case study in two specific areas, with qualitative and quantitative analyses of a new process and approach to (i) project lifecycle development and (ii) requirements engineering, with the intent that, if utilized, these tools may have contributed to improvements across the project in terms of meeting cost, scope, budget, and quality while appropriately accounting for risk management. The procedure entails a research method in which the current state of the project, the current state of the art, and the identified systems engineering challenges are evaluated, and iterative models are refined through continual improvement and evaluation by the engineers on the project. The current results have produced (i) a prototype project lifecycle development method via Agile, Lean, and Scrum hybrid implementations within a traditional Waterfall framework, and (ii) a prototype requirements engineering scorecard with implementations of FMEA and quantitative analysis for root cause identification.
  • Item (Open Access)
    Using above-ground downwind methane and meteorological measurements to estimate the below-ground leak rate of a natural gas pipeline
    (Colorado State University. Libraries, 2023) Cheptonui, Fancy, author; Riddick, Stuart N., advisor; Zimmerle, Daniel J., advisor; Fischer, Emily, committee member
    Natural gas (NG) leaks from below-ground pipelines present a safety, economic, and environmental hazard, and triaging the severity of leaks remains a significant issue for pipeline operators. Typically, operators conduct walking surveys using hand-held methane (CH4) detectors, which output CH4 concentrations to indicate the location of a leak, but quantification often requires excavation of the pipeline. Industry-standard CH4 detectors are lower-cost and have a higher detection threshold and lower precision than the optical-cavity CH4 analyzers typically used to quantify emissions. It remains unclear whether coarser CH4 concentration measurements could be used to identify the large leaks that require immediate response. To explore the utility of industry-standard detectors, above-ground downwind CH4 concentration measurements made by the detectors were used as input to a novel modeling framework, the ESCAPE-1 model, to estimate leak rates from below-ground NG pipelines. Controlled below-ground emission experiments were conducted to test this approach over a range of environmental conditions. Using 10-minute averaged CH4 mixing and meteorological data and filtering out low-wind/Pasquill-Gifford Stability Class (PGSC) A events, the ESCAPE-1 model estimates small distribution leaks (0.2 kg CH4 h-1) to within -31 to +75% (95% CI), and medium distribution leaks (0.8 kg CH4 h-1) to within -73 to +92% (95% CI), of the actual leak rate. When averaged over a longer period (more than 3 hours of data), the average calculated leak rate was an overestimate of 55% for the small leak (0.2 kg CH4 h-1) and an underestimate of 6% for a medium distribution leak (0.8 kg CH4 h-1). Results suggest that as the wind speed increases or the atmosphere becomes more stable, both the accuracy and precision of the leak rate calculated by the ESCAPE-1 model decrease. This is likely the result of a trade-off: the wind must be high enough to move the gas but not so high that the plume becomes collimated and less homogeneous. Optimizing this approach for oil and gas industry applications, this study suggests that CH4 mixing ratio measurements from industry-standard detectors, collected over at least 3 hours, could be used as a guide to prioritize NG leak repair by estimating the below-ground leak rate from a pipeline within reasonable uncertainty bounds (±55%) under favorable atmospheric conditions.
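The ESCAPE-1 model itself is not reproduced here. Purely as a generic illustration of back-calculating a leak rate from a downwind concentration, the sketch below inverts a textbook ground-level Gaussian plume relation under assumed wind and dispersion values (all numbers are invented):

```python
import math

def plume_leak_rate(conc_kg_m3: float, wind_m_s: float, sigma_y_m: float, sigma_z_m: float) -> float:
    """Back-calculate a source rate (kg/s) from a ground-level, centerline concentration
    using the textbook Gaussian plume form C = Q / (pi * u * sigma_y * sigma_z).
    This is a generic illustration, not the ESCAPE-1 model."""
    return conc_kg_m3 * math.pi * wind_m_s * sigma_y_m * sigma_z_m

# Illustrative inputs: roughly a 2 ppm CH4 enhancement (~1.3e-6 kg/m3 near ambient
# conditions), a 3 m/s wind, and assumed dispersion coefficients at the sensor distance.
q_kg_s = plume_leak_rate(conc_kg_m3=1.3e-6, wind_m_s=3.0, sigma_y_m=2.0, sigma_z_m=1.5)
print(f"estimated leak rate: {q_kg_s * 3600:.2f} kg/h")
```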
  • Item (Open Access)
    Leveraging operational use data to inform the systems engineering process of fielded aerospace defense systems
    (Colorado State University. Libraries, 2023) Eddy, Amy, author; Daily, Jeremy, advisor; Marzolf, Gregory, committee member; Miller, Erika, committee member; Wise, Daniel, committee member
    Inefficiencies in Department of Defense (DoD) acquisition processes have been pervasive nearly as long as the DoD has existed. Stakeholder communication issues, funding concerns, and large, overly complex organizational structures all add challenges for those tasked with fielding, operating, and sustaining a complex aerospace defense system. As legacy defense systems begin to age, logistics and other supportability element requirements may change over time. While the research literature shows that many stakeholders and senior leaders are aware of these issues and of the impact they have on mission performance, most research and improvement attempts have focused on high-level restructuring of organizations or of policy, processes, and procedures. There has been little research dedicated to identifying ways for working-level logisticians and systems engineers to improve performance by leveraging operational use data. This study proposes a practical approach for working-level logisticians and engineers to identify relationships between operational use data and supply performance data. This research focuses on linking negative aircraft events (discrepancies) to the supply events (requisitions) that result in downtime. This approach utilizes standard statistical methods to analyze operations, maintenance, and supply data collected during the Operations and Sustainment (O&S) phase of the life cycle. Further, this research identifies methods consistent with industry systems engineering practices to create new feedback loops that better inform the systems engineering life cycle management process, update requirements, and iterate the design of the enterprise system as a holistic entity that includes the physical product and its supportability elements such as logistics, maintenance, and facilities. The method identifies specific recommendations and actions for working-level logisticians and systems engineers to prevent future downtime. The method is practical for the existing DoD organizational structure and uses current DoD processes, all without increasing manpower or other resource needs.
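As a generic illustration of linking discrepancies to requisitions (the field names, records, and 7-day linking window are invented, not the study's data or rules), a minimal pandas sketch might look like this:

```python
import pandas as pd

# Invented example records; the field names and the 7-day linking window are illustrative only.
discrepancies = pd.DataFrame({
    "tail_number": ["A1", "A1", "B2"],
    "discrepancy_date": pd.to_datetime(["2022-01-05", "2022-03-10", "2022-02-01"]),
    "system": ["hydraulics", "avionics", "hydraulics"],
})
requisitions = pd.DataFrame({
    "tail_number": ["A1", "B2", "B2"],
    "requisition_date": pd.to_datetime(["2022-01-07", "2022-02-03", "2022-05-20"]),
    "days_awaiting_part": [12, 30, 4],
})

# Link each discrepancy to requisitions on the same aircraft filed within 7 days afterward,
# a simple stand-in for associating operational events with the resulting supply response.
linked = discrepancies.merge(requisitions, on="tail_number")
window = (linked["requisition_date"] >= linked["discrepancy_date"]) & (
    linked["requisition_date"] <= linked["discrepancy_date"] + pd.Timedelta(days=7)
)
linked = linked[window]
print(linked)
print("mean downtime awaiting parts (days):", linked["days_awaiting_part"].mean())
```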
  • Item (Open Access)
    Optimizing text analytics and document automation with meta-algorithmic systems engineering
    (Colorado State University. Libraries, 2023) Villanueva, Arturo N., Jr., author; Simske, Steven J., advisor; Hefner, Rick D., committee member; Krishnaswamy, Nikhil, committee member; Miller, Erika, committee member; Roberts, Nicholas, committee member
    Natural language processing (NLP) has seen significant advances in recent years, but challenges remain in making algorithms both efficient and accurate. In this study, we examine three key areas of NLP and explore the potential of meta-algorithmics and functional analysis for improving analytic and machine learning performance, and we conclude with expansions for future research. The first area focuses on text classification for requirements engineering, where stakeholder requirements must be classified into appropriate categories for further processing. We investigate multiple combinations of algorithms and meta-algorithms to optimize the classification process, confirming the optimality of Naïve Bayes and highlighting a certain sensitivity to the Global Vectors (GloVe) word embeddings algorithm. The second area of focus is extractive summarization, which offers advantages over abstractive summarization due to its lossless nature. We propose a second-order meta-algorithm that uses existing algorithms and selects appropriate combinations to generate more effective summaries than any individual algorithm. The third area covers document ordering, where we propose techniques for generating an optimal reading order for use in learning, training, and content sequencing. We propose two main methods: one using document similarities and the other using entropy against topics generated through Latent Dirichlet Allocation (LDA).
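As an illustration of the similarity-based document ordering idea (the corpus and the greedy ordering rule are invented for this sketch; the dissertation's actual methods are not reproduced here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "introduction to probability and random variables",
    "bayesian inference builds on probability theory",
    "naive bayes classifiers apply bayesian inference to text",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sim = cosine_similarity(tfidf)

# Greedy reading order: start at document 0, then repeatedly pick the unread document
# most similar to the one just read.
order, remaining = [0], set(range(1, len(docs)))
while remaining:
    nxt = max(remaining, key=lambda j: sim[order[-1], j])
    order.append(nxt)
    remaining.remove(nxt)

print("suggested reading order:", order)
```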
  • Item (Open Access)
    Sediment management alternatives analysis in the Louisiana deltaic plain
    (Colorado State University. Libraries, 2023) Heap, David A., author; Young, Peter, advisor; Zimmerle, Daniel, committee member; Grigg, Neil, committee member; Ross, Matthew, committee member
    While coastal communities around the world are under threat from rising sea levels, those of Southeast Louisiana are some of the most threatened. Including subsidence, the region could potentially see rates of net sea level rise up to ten times the global mean. There is no shortage of causes for how this situation has come to pass, and a Systems Engineering solution needs to be as multi-faceted as the set of factors that created the problem:
    - Climate change: like any coastal area, the region has to make hard decisions on how to handle a changing climate, but those choices have significant ramifications for the entire U.S. population, as significant commerce passes through the regional ports in the form of agriculture, oil/gas, petrochemicals, and the fishing industry.
    - Engineered factors: by controlling the flow of the Mississippi River for flood protection through the use of levees, floodwalls, and spillways, humans have inhibited the natural processes that could rebuild the wetlands and natural protection barriers.
    - River navigation: similarly, the locks and dams that allow maritime traffic have trapped the sediment that historically would have flowed down to the delta and built more land buffers against the sea.
    - Industrial infrastructure: with thousands of miles of navigation channels and pipelines, the wetlands have been cut up into non-natural bodies of water, allowing hurricanes and saltwater intrusion unabated access to delicate ecosystems.
    - Environmental damage: over 100 years of industrial development, combined with numerous environmental disasters, has compromised the health of the ecosystem.
    - Invasive species: whether intentionally introduced or not, non-native species, flora and fauna alike, have wreaked havoc on native populations and weakened deltaic processes.
    - Stakeholder coordination: with dozens of local, state, and federal government agencies and nonprofit organizations, it is nearly impossible to make everyone happy.
    - Limited resources: there is a funding gap between the budget needed to implement a successful strategy and what is expected to be available if the status quo is maintained.
    While there are multiple methods employed to improve coastal resilience, a core strategy as defined by Louisiana's 2023 Coastal Master Plan is the introduction of sediment. The plan suggests two main sediment management alternatives: Major Diversions and Dredged Sediment. In this work, these two traditional alternatives are considered, and a new approach is proposed, that of Micro Diversions, a concept developed in prior work by the author. All three approaches are described, analyzed, modeled, and compared against each other to determine which would be the most cost-effective and appropriate for investment by coastal stakeholders. The compared metric is Cost Benefit over a 50-year time horizon, calculated using the Life Cycle Cost and Net Benefit variables from each alternative. Inherent in the Systems Engineering approach is that the cost variables consider the time value of money. The Major Diversion variables were taken from the stated goals in the Master Plan. The Dredged Sediment variables were forecasted from historical trends on recently completed and/or approved projects. The Micro Diversion variables were formulated from hydrologic software modeling of a limited system and expanded to compare in size to the other alternatives.
At a Cost Benefit of $61,773 per acre, the Major Diversion alternative was evaluated to be a better investment than Dredged Sediment or Micro Diversions ($67,300 and $88,206 per acre, respectively). Because coastal conditions can change over time, and the inputs to these alternatives can likewise change, it is suggested that solutions be viewed with a systems-level approach, with the potential to implement complementary alternatives.
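As a simplified illustration of a discounted cost-benefit-per-acre comparison over a 50-year horizon (the cash flows, discount rate, acreage, and the exact metric formulation are invented, not the study's figures):

```python
def present_value(cash_flows, rate):
    """Discount a sequence of annual cash flows (years 1..N) to present value."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

def cost_benefit_per_acre(capital_cost, annual_om, annual_benefit, acres_built, rate=0.03, years=50):
    """Illustrative structure only: life-cycle cost (capital plus discounted O&M) net of
    discounted benefits, expressed per acre of land built."""
    life_cycle_cost = capital_cost + present_value([annual_om] * years, rate)
    net_benefit = present_value([annual_benefit] * years, rate)
    return (life_cycle_cost - net_benefit) / acres_built

# Invented inputs for two hypothetical alternatives (not the study's figures).
print(f"alternative A: ${cost_benefit_per_acre(2.0e9, 1.5e7, 5.0e6, 25_000):,.0f} per acre")
print(f"alternative B: ${cost_benefit_per_acre(1.2e9, 3.0e7, 2.0e6, 15_000):,.0f} per acre")
```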