Browsing by Author "Herber, Daniel R., advisor"
Item Open Access
Analysis and control co-design optimization of natural gas power plants with carbon capture and thermal energy storage (Colorado State University. Libraries, 2022)
Vercellino, Roberto, author; Herber, Daniel R., advisor; Bandhauer, Todd M., advisor; Quinn, Jason C., committee member; Coburn, Timothy C., committee member

In this work, an optimization model was constructed to help address important design and operation questions for a novel system combining natural gas combined cycle (NGCC) power plants with carbon capture (CC) and hot and cold thermal energy storage (TES) units. The conceptualization of this system is motivated by the expected evolution of electricity markets toward a carbon-neutral grid heavily penetrated by renewable energy sources, resulting in highly variable electricity prices and demand. In this context, there will be an opportunity for clean, flexible, and inexpensive fossil-fuel-based generators, such as NGCC plants with CC, to complement renewable generation. However, while recent work has demonstrated that high CO2 capture rates are achievable, the high capital costs, flexibility limitations, and parasitic load that CC systems impose on NGCC power plants have so far prevented commercialization. Coupling TES units with CC and NGCC would make it possible to store thermal energy in the TES units when electricity prices are low, either by diverting it from the NGCC or by extracting it from the grid, and to discharge thermal power at peak prices: from the hot storage (HS) to offset the parasitic load of the CC system, and from the cold storage (CS) to chill the inlet of the NGCC combustion turbine and increase the output of the cycle beyond its nominal value.
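The price-arbitrage dispatch logic described above can be sketched as a small linear program. Everything below (the hourly prices, storage capacity, transfer rate, and round-trip efficiency) is a hypothetical single-storage toy, not the dissertation's actual NGCC/CC/TES model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly electricity prices ($/MWh) over one day
price = np.array([20, 15, 10, 10, 25, 60, 90, 70, 40, 30, 20, 15,
                  15, 20, 35, 55, 95, 120, 80, 50, 35, 25, 20, 18], float)
T = len(price)
cap, rate, eta, s0 = 200.0, 50.0, 0.9, 0.0  # MWh cap, MW rate, efficiency, initial state

# Variables x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
# Charging buys power at price[t]; discharging offsets the parasitic load
# and is valued at price[t].  linprog minimizes, so negate the revenue.
c = np.concatenate([price, -price])

# State s_t = s0 + sum_{tau<=t} (eta*charge_tau - discharge_tau); need 0 <= s_t <= cap
L = np.tril(np.ones((T, T)))
A_ub = np.block([[eta * L, -L],      # s_t <= cap
                 [-eta * L, L]])     # -s_t <= 0
b_ub = np.concatenate([np.full(T, cap - s0), np.full(T, s0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, rate)] * (2 * T))
profit = -res.fun
print(f"optimal dispatch profit: ${profit:.0f}")
```

The solver charges during the cheap overnight hours and discharges into the evening price peak; the full CCD model layers plant sizing variables and the coupled hot/cold stores on top of this kind of dispatch structure.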
For the early-stage engineering studies investigating the feasibility of this novel system, a control co-design (CCD) approach is taken in which key plant sizing decisions (including storage capacities and energy transfer rates) and operational control (e.g., when to store and use thermal energy and how to operate the power plant) are considered in an integrated manner using a simultaneous CCD strategy. The optimal design and operation of the system are determined for an entire year (either all at once or through a moving-prediction-horizon strategy) in a large, sparse linear optimization problem. The results demonstrate both the need for optimal operation to enable a fair economic assessment of the proposed system and the need for optimal sizing decisions, given the solution's sensitivity to a variety of scenarios, including different market conditions, site locations, and technology options. After detailed analysis, the technology shows remarkable promise in that it outperforms NGCC power plants with state-of-the-art CC systems in many of the scenarios evaluated. The best overall TES technology solution relies on cheap excess grid electricity from renewable sources to charge the TES units: the HS via resistive heating and the CS through an ammonia-based vapor-compression cycle. Future enhancements to the optimization model are also discussed, including adding degrees of freedom to the CC system, adapting the model to evaluate other energy sources and storage technologies, and considering uncertainty in market signals directly within the optimization model.

Item Unknown
Applying model-based systems engineering in search of quality by design (Colorado State University. Libraries, 2022)
Miller, Andrew R., author; Herber, Daniel R., advisor; Bradley, Thomas, committee member; Miller, Erika, committee member; Simske, Steve, committee member; Yalin, Azer P., committee member

Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design process of many different types of systems. The application of these techniques can be reflected in the modeling of requirements, functions, behavior, and many other aspects. The modeled design provides a digital representation of a system, along with the supporting development data architecture and the functional requirements associated with that architecture. Various levels of system and data architecture fidelity can be represented within MBSE tool environments. Typically, the level of fidelity is driven by crucial systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that provide a representative system meeting cost and schedule targets with sufficient quality while maintaining the customer's performance needs. The most complex and elusive constraint in systems engineering is defining system requirements with a focus on quality: given a certain set of system-level requirements, the likelihood that those requirements will be correctly and accurately realized in the final system design. This research focuses specifically on the Department of Defense Architecture Framework (DoDAF) in use today to establish and then assess the relationship between the system, data architecture, and requirements in terms of Quality by Design (QbD). The term QbD was coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1].
This research investigates and proposes a means to: contextualize high-level quality terms within the MBSE functional area; outline a conceptual but functional quality framework as it pertains to the MBSE DoDAF; provide tailored quality metrics with improved definitions; and test this improved quality framework by assessing two corresponding case studies within the MBSE functional area to interrogate model architectures and assess the quality of system designs. Developed in the early 2000s, the DoDAF is still in use today, and its system description methodologies continue to influence subsequent system description approaches [2]. Two case studies were analyzed to demonstrate the proposed QbD evaluation of DoDAF CONOP architecture quality. The first case study addresses the analysis of the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second case study addresses the analysis of the DoDAF CONOP of a Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the command-and-control signaling mission thread. The case studies help demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of a DoDAF architecture in depth, including the application of the DoDAF standard, the UML/SysML standards, requirement architecture instantiation, and modularity, to understand architecture reusability and complexity. By providing a renewed focus on a quality-based systems engineering process when applying the DoDAF, improved trust in the system and data architecture of the completed models can be achieved.
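The modularity aspect of such an architecture assessment can be hinted at with a toy metric. The dependency matrix, the module grouping, and the coupling-ratio measure below are illustrative assumptions, not the DQCF's actual metrics:

```python
import numpy as np

# Hypothetical dependency matrix for 6 architecture elements: A[i, j] = 1
# means element i depends on an interface of element j.  Elements {0, 1, 2}
# and {3, 4, 5} are the intended modules.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
modules = [(0, 1, 2), (3, 4, 5)]

def coupling_ratio(A, modules):
    """Fraction of dependencies that cross module boundaries: lower means
    a more modular (and plausibly more reusable) architecture."""
    member = {i: m for m, mod in enumerate(modules) for i in mod}
    total = A.sum()
    cross = sum(A[i, j] for i in range(len(A)) for j in range(len(A))
                if A[i, j] and member[i] != member[j])
    return cross / total

print(f"cross-module coupling: {coupling_ratio(A, modules):.2f}")
```

Here only the two dependencies between elements 2 and 3 cross the module boundary, so the ratio is low; an architecture whose elements depend heavily across modules would score closer to 1.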
The results of the case study analyses reveal how a quality-focused systems engineering process can be used during development to provide a product design that better meets the customer's intent and ultimately offers the potential for the best-quality product.

Item Unknown
Characterizing and improving the adoption rate of model-based systems engineering through an application of the Diffusion of Innovations theory (Colorado State University. Libraries, 2024)
Call, Daniel R., author; Herber, Daniel R., advisor; Aloise-Young, Patricia, committee member; Conrad, Steven, committee member; Shahroudi, Kamran Eftekhari, committee member

As the environment and operational context of new systems continue to evolve and become increasingly complex, the practice of systems engineering (SE) must adapt accordingly. A great deal of research and development has gone, and continues to go, into formulating and maturing a model-based approach to SE that addresses many of the shortcomings of a conventional, document-based SE approach. In spite of the work that has been done to advance the practice of model-based systems engineering (MBSE), it has not yet been adopted to the level that would be expected based on its demonstrated benefits. While research continues into even more effective MBSE approaches, there is a need to ascertain why extant MBSE innovations are not being adopted more widely and, if possible, to determine a way to accelerate their adoption. This outcome is particularly important because MBSE is a key enabler of an agile systems engineering (ASE) approach that satisfies the desire of many stakeholders to apply agile principles to SE processes. The diffusion of innovations (DoI) theory provides a useful framework for understanding the factors that affect the adoption rate of innovations in many fields. This theory has not only been effective at explaining why innovations are adopted but has also been able to explain why objectively superior innovations are not adopted.
The DoI theory is likely to provide insight into the factors that are depressing the adoption rate of MBSE. Despite prior efforts in the SE community to promote MBSE, the DoI theory has not been directly and deliberately applied to understand what is preventing widespread MBSE adoption. Some elements of the theory appear in the literature addressing MBSE adoption challenges without any apparent awareness of the theory and its implications. The expectation is that harnessing the insights offered by this theory will lead to MBSE presentation and implementation strategies that increase its use. This would allow its benefits to be more widely realized in the SE community and improve the practice of SE generally to address modern, complex environments. The DoI theory has shown that the most significant driver of adoption-rate variability is the perceived attributes of the innovation in question. A survey is a useful tool for discovering the perceptions of potential adopters of an innovation. The primary contribution of this research is the development of a survey to capture and assess a participant's perceptions of specified attributes of MBSE, their current use of MBSE, and some limited demographic information. This survey was widely distributed to gather data on current perceptions of MBSE in the SE community. Survey results highlighted that respondents recognize the relative advantage of MBSE in improving data quality and traceability, but that perceived complexity and compatibility with existing practices still present barriers to adoption. Subpopulation analysis reveals that those who are not already involved in MBSE efforts face the additional adoption obstacles of limited trial opportunities and tool access (a chi-squared test of independence between these populations resulted in p = 0.00). The survey underscores the potential for closer alignment between MBSE and existing SE methodologies to improve the perceived compatibility of MBSE.
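The subpopulation comparison relies on a standard chi-squared test of independence; a minimal sketch using `scipy.stats.chi2_contingency` with illustrative counts (not the survey's actual data) looks like this:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table (NOT the survey's real counts):
# rows = already involved in MBSE vs. not; columns = reports adequate
# trial/tool access vs. reports limited access.
table = [[45, 15],
         [20, 40]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A small p-value, as in the survey's reported result, indicates that MBSE involvement and perceived access barriers are not independent across the two subpopulations.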
Targeted actions are proposed to address these barriers to adoption. These include improving the availability and use of reusable model elements to expedite system model development, better tailoring of MBSE approaches to suit organizational needs, an increased emphasis on ASE, refining MBSE approaches to reduce the perceived mental effort required, lowering the barrier to entry for MBSE by improving access to the resources (tools, time, and training) required to experiment with it, and increased efforts to identify and execute relevant MBSE pilot projects. The lessons and principles of the DoI theory should be applied to take advantage of the opportunity afforded by the release of SysML v2 to reframe perceptions of MBSE. Future studies would benefit from examining additional variables identified by the DoI theory, incorporating control questions to differentiate between perceptions of SE generally and MBSE specifically, identifying better methods to assess participants' current MBSE use, and broadening the participant scope.

Item Open Access
Development and quasi-experimental study of the Scrum model-based system architecture process (sMBSAP) for agile model-based software engineering (Colorado State University. Libraries, 2023)
Huss, Moe, author; Herber, Daniel R., advisor; Borky, John M., advisor; Miller, Erika, committee member; Mallette, Paul, committee member

Model-Based Systems Engineering (MBSE) is an architecture-based software development approach. Agile, on the other hand, is a lightweight system development approach that originated in software development. To bring together the benefits of both approaches, this research is divided into two stages.
The first stage proposes an integrated Agile MBSE approach that adopts a specific instance of the Agile approach (i.e., Scrum) in combination with a specific instance of an MBSE approach (i.e., the Model-Based System Architecture Process, "MBSAP") to create an Agile MBSE approach called the integrated Scrum Model-Based System Architecture Process (sMBSAP). The proposed approach was validated through an experimental study that developed a health technology system over one year, successfully producing the desired software product. This work focuses on determining whether the proposed sMBSAP approach can deliver the desired Product Increments with the support of an MBSE process. The interaction of the Product Development Team with the MBSE tool, the generation of the system model, and the delivery of the Product Increments were observed. The results showed that the proposed approach contributed to achieving the desired system development outcomes while also generating complete system architecture artifacts that would not have been developed had Agile been used alone. Therefore, the first contribution of this stage lies in introducing a practical and operational method for merging Agile and MBSE. In parallel, the results suggest that sMBSAP is a middle ground more aligned with federal and state regulations, as it addresses technical-debt concerns. The second stage of this research compares the Reliability of Estimation, Productivity, and Defect Rate metrics for sprints driven by Scrum versus sMBSAP through the experimental study in stage one. The quasi-experimental study conducted ten sprints with each approach. The approaches were then evaluated based on their effectiveness in helping the Product Development Team estimate the backlog items they can build during a time-boxed sprint and deliver more Product Backlog Items (PBIs) with fewer defects.
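A sprint-metric comparison of this kind can be sketched with a per-sprint Commitment Reliability ratio and a t-test between the two groups. The sprint numbers below are illustrative placeholders, not the study's raw data:

```python
import numpy as np
from scipy import stats

# Illustrative per-sprint data (NOT the study's measurements): story
# points committed vs. delivered over ten sprints under each approach.
committed_scrum  = np.array([30, 32, 28, 35, 30, 33, 29, 31, 34, 30])
delivered_scrum  = np.array([24, 26, 23, 28, 25, 27, 22, 26, 28, 24])
committed_smbsap = np.array([31, 33, 30, 34, 32, 30, 33, 31, 35, 32])
delivered_smbsap = np.array([29, 31, 28, 32, 30, 29, 31, 29, 33, 30])

# Commitment Reliability (CR) = delivered / committed, per sprint
cr_scrum = delivered_scrum / committed_scrum
cr_smbsap = delivered_smbsap / committed_smbsap

# Welch's t-test on the two CR samples
t, p = stats.ttest_ind(cr_smbsap, cr_scrum, equal_var=False)
print(f"mean CR: scrum={cr_scrum.mean():.2f}, sMBSAP={cr_smbsap.mean():.2f}, p={p:.4f}")
```

The same pattern (compute a per-sprint metric, then test the group means) applies to Sprint Velocity, Defect Density, and Defect Leakage.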
The Commitment Reliability (CR) was calculated to compare the Reliability of Estimation, with a measured average Scrum-driven value of 0.81 versus a statistically different average sMBSAP-driven value of 0.94. Similarly, the average Sprint Velocity (SV) for the Scrum-driven sprints was 26.8 versus 31.8 for the sMBSAP-driven sprints. The average Defect Density (DD) for Scrum-driven sprints was 0.91, while that of sMBSAP-driven sprints was 0.63. The average Defect Leakage (DL) for Scrum-driven sprints was 0.20, while that of sMBSAP-driven sprints was 0.15. The t-test analysis concluded that the sMBSAP-driven sprints were associated with a statistically significantly larger mean CR and SV, and a smaller mean DD and DL, than the Scrum-driven sprints. The overall results demonstrate formal quantitative benefits of an Agile MBSE approach compared to Agile alone, strengthening the case for considering Agile MBSE methods within the software development community. Future work might include comparing Agile and Agile MBSE methods using alternative research designs and further software development objectives, techniques, and metrics. Future investigations may also test sMBSAP with non-software systems to validate the methodology across other disciplines.

Item Open Access
Integrating geometric deep learning with a set-based design approach for the exploration of graph-based engineering systems (Colorado State University. Libraries, 2024)
Sirico, Anthony, Jr., author; Herber, Daniel R., advisor; Chen, Haonan, committee member; Simske, Steven, committee member; Conrad, Steven, committee member

Many complex engineering systems can be represented in a topological form, such as graphs. This dissertation introduces Graph-Set-Based Design (GSBD), a framework that integrates graph-based techniques with Geometric Deep Learning (GDL) within a Set-Based Design (SBD) approach to address graph-centric design problems.
We also introduce Iterative Classification (IC), a method for narrowing down large datasets to a subset of more promising and feasible solutions. When we combine the two, we have IC-GSBD, a methodological framework whose primary goal is to effectively and efficiently seek the best-performing solutions at lower computational cost. IC-GSBD employs an iterative approach to efficiently narrow down a graph-based dataset containing diverse design solutions and identify the most useful options. This approach is particularly valuable when the dataset would be computationally expensive to process using conventional methods. The implementation involves analyzing a small subset of the dataset to train a machine-learning model. This model is then used to predict labels for the remaining dataset iteratively, progressively refining the top solutions with each iteration. In this work, we present two case studies demonstrating this method. The first case study utilizing IC-GSBD analyzes analog electrical circuits, aiming to match a specific frequency response within a particular range. Previous studies generated 43,249 unique undirected graphs representing valid potential circuits through enumeration techniques. However, determining the sizing and performance of these circuits proved computationally expensive. By using a fraction of the circuit graphs and their performance as input data for a classification-focused GDL model, we can predict the performance of the remaining graphs with favorable accuracy.
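The iterative-classification loop can be sketched in miniature. The feature data, the simulated performance function, and the nearest-centroid classifier (a stand-in for the GDL model) below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 candidate graphs, each summarized by a 5-dim
# feature vector; "performance" is expensive to evaluate, here simulated
# as a linear function of the features plus noise.
X = rng.normal(size=(1000, 5))
true_perf = X @ np.array([1.0, 0.5, -0.3, 0.2, 0.1]) + 0.1 * rng.normal(size=1000)

def evaluate(idx):          # stand-in for the expensive sizing/analysis step
    return true_perf[idx]

candidates = np.arange(1000)
labeled_idx = rng.choice(candidates, size=100, replace=False)  # initial 10%

for _ in range(3):
    perf = evaluate(labeled_idx)
    cutoff = np.quantile(perf, 0.75)          # "promising" = top quartile so far
    good = labeled_idx[perf >= cutoff]
    bad = labeled_idx[perf < cutoff]
    # Nearest-centroid classifier as a stand-in for the trained GDL model
    c_good, c_bad = X[good].mean(axis=0), X[bad].mean(axis=0)
    unlabeled = np.setdiff1d(candidates, labeled_idx)
    pred_good = unlabeled[
        np.linalg.norm(X[unlabeled] - c_good, axis=1)
        < np.linalg.norm(X[unlabeled] - c_bad, axis=1)]
    # Evaluate a small batch of predicted-promising designs next iteration
    batch = pred_good[:50] if len(pred_good) >= 50 else pred_good
    labeled_idx = np.concatenate([labeled_idx, batch])

print(f"evaluated {len(labeled_idx)} of {len(candidates)} candidates")
```

Each pass spends the expensive evaluation budget only on designs the classifier flags as promising, which is the essence of narrowing a large enumerated dataset without exhaustively analyzing it.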
The results show that incorporating additional graph-based features enhances model performance, achieving a classification accuracy of 80% using only 10% of the graphs and further subdividing the graphs into targeted groups with medians significantly closer to the best, containing on average 88.2 of the top 100 best-performing graphs while using 25% of the graphs.

Item Unknown
Some efficient open-loop control solution strategies for dynamic optimization problems and control co-design (Colorado State University. Libraries, 2021)
Sundarrajan, Athul Krishna, author; Herber, Daniel R., advisor; Cale, James, committee member; Venayagamoorthy, Karan, committee member

This thesis explores strategies to efficiently solve dynamic optimization (DO) and control co-design (CCD) problems that arise in early-stage system design studies. The task of design optimization of dynamic systems involves identifying optimal values of the physical elements of the system and the inputs that effectively control its dynamic behavior to achieve peak performance. The problem becomes more complex when designing multidisciplinary systems, where the coupling between disciplines must be accounted for to achieve optimal performance. Tools and strategies to efficiently and accurately solve these problems are needed. Conventional design practice involves sequentially optimizing the plant parameters and then identifying a control scheme for the given plant design. This sequential design procedure does not often produce system-level optimal solutions. Control co-design is a design paradigm that seeks to find system-level optimal designs through simultaneous optimization of the plant and control variables. In this work, both the plant and control optimization are framed as an integrated DO problem. We focus on a class of direct methods called direct transcription (DT) to solve these DO problems.
For the first study, we start with a subclass of nonlinear dynamic optimization (NLDO) problems, namely linear-quadratic dynamic optimization (LQDO) problems, in which the objective function is quadratic and the constraints are linear. Highly efficient and accurate computational tools have been developed for solving LQDO problems on account of their linear and quadratic problem elements, and their structure facilitates the development of automated solvers. We identify the factors that enable these efficient tools and leverage them toward solving NLDO problems. We explore three different strategies for solving NLDO problems using LQDO elements and analyze the requirements and limits of each approach. Though multiple studies have used one of these methods to solve a given CCD problem, there is a lack of investigations identifying the trade-offs between nested and simultaneous CCD, the two commonly used methods. We build on the results from the first study and solve a detailed active suspension design problem using both the nested and simultaneous CCD methods. We examine the impact of derivative methods, tolerances, and the number of discretization points on solution accuracy and computation time. We use the implementation and results from this study to form heuristics for choosing between simultaneous and nested CCD methods. A third study involves the CCD of a floating offshore wind turbine (FOWT) using the levelized cost of energy (LCOE) as the objective. The methods and tools developed in the previous studies are applied to this complex engineering design problem. The results show the impact of optimal control strategies and the importance of adopting an integrated approach for designing FOWTs to lower the LCOE.
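As a minimal illustration of why LQDO structure is so amenable to automation (a toy double integrator, not any of the thesis's formulations): after direct transcription, an equality-constrained LQDO reduces to a single sparse linear (KKT) solve. The horizon, discretization, and boundary conditions below are arbitrary choices for the sketch:

```python
import numpy as np

# Minimum-effort double integrator: min h*sum(u_k^2) subject to the
# Euler-discretized dynamics x_{k+1} = x_k + h*v_k, v_{k+1} = v_k + h*u_k,
# moving from rest at x=0 to rest at x=1 in 1 second.
N, h = 20, 1.0 / 20
nx, nu = 2 * (N + 1), N              # state vars [x; v] and control vars u
n = nx + nu

H = np.zeros((n, n))
H[nx:, nx:] = 2 * h * np.eye(N)      # Hessian of the quadratic objective

rows, rhs = [], []
def row():                            # helper: append a fresh zero constraint row
    r = np.zeros(n); rows.append(r); rhs.append(0.0); return r

for k in range(N):                    # linear dynamics constraints
    r = row(); r[k + 1] = 1; r[k] = -1; r[N + 1 + k] = -h           # x-update
    r = row(); r[N + 1 + k + 1] = 1; r[N + 1 + k] = -1; r[nx + k] = -h  # v-update
for i, val in [(0, 0.0), (N, 1.0), (N + 1, 0.0), (2 * N + 1, 0.0)]:
    r = row(); r[i] = 1; rhs[-1] = val    # boundary conditions

A, b = np.array(rows), np.array(rhs)
m = len(b)
# KKT system of the equality-constrained QP: one linear solve gives the optimum
KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(n), b]))
x = sol[:N + 1]
print(f"final position: {x[-1]:.3f}")
```

Because every problem element is linear or quadratic, no iterative NLP solver is needed; this single-solve property is the structure that the thesis's NLDO strategies attempt to exploit.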