Theses and Dissertations
Permanent URI for this collection: https://hdl.handle.net/10217/199889
Recent Submissions
Item Embargo: Analysis of frequency control and grid storage effectiveness for a West African interconnected transmission system (Colorado State University. Libraries, 2025)
Abayateye, Julius, author; Bradley, Thomas, advisor; Zimmerle, Dan, advisor; Young, Peter, committee member; Burkhardt, Jesse, committee member
The West Africa Power Pool (WAPP) Interconnected Transmission System (WAPPITS) has faced challenges with frequency control due to limited primary frequency control reserves (PFR). Battery Energy Storage Systems (BESS) have been identified as a possible solution to address frequency control challenges and to support growing levels of variable renewable energy in the WAPPITS. This dissertation examines existing frequency control challenges in the West African Power Pool Interconnected Transmission System and evaluates the effectiveness of Battery Energy Storage Systems (BESS) as a solution to enhance grid stability and resilience amid growing ambitions to increase variable renewable energy (VRE) penetration. To carry out this assessment, three studies were conducted. The first study assesses the effectiveness of BESS in providing primary frequency reserves (PFR) using open-loop simulations based on real WAPPITS frequency data. Results from this study suggest that droop-based BESS control strategies can mitigate fast frequency variations. The study also demonstrates that integrating BESS alone into the grid will not solve the frequency control challenges in WAPPITS; a revision of frequency control provision is also required, including mandatory participation of traditional power plants in the provision of the service. The second study investigates primary and secondary frequency control challenges in WAPPITS using surveys of Transmission System Operators, field tests on power plants, and analysis of events in the grid. Results reveal critical challenges: inadequate PFR reserves, reliance on under-frequency load shedding, and a lack of automatic secondary frequency control via automatic generation control (AGC). The study recommends (1) enforcing mandatory PFR compliance and (2) establishing an ancillary services market to incentivize reserve provision. The third study uses PSS/E dynamic simulations to assess primary frequency response provision using different mixes of BESS and conventional generation in responding to the maximum N-1 contingency (400 MW loss). Simulation results suggest that BESS-only PFR provision outperforms conventional generation-only PFR in fast frequency response across the frequency metrics analyzed. However, a hybrid mix of BESS and conventional reserves achieves adequate performance on all metrics and is more cost-effective. The research demonstrates that BESS can significantly improve frequency stability in WAPPITS, but achieving this requires technical and regulatory reforms, including:
• Mandatory PFR participation for conventional plants,
• Ancillary services markets to mobilize reserves, and
• Implementation of hybrid PFR provision by BESS and conventional power plants.
This research provides policy makers and technical experts with insights to guide the implementation of frequency control service provision, underscoring the need for institutional and market reforms coupled with technological innovations to solve the existing frequency control challenges in WAPPITS.

Item Open Access: Bootstrapping a trustworthy and seamless digital engineering appliance (Colorado State University.
Libraries, 2025)
Wheaton, James S., author; Herber, Daniel R., advisor; Simske, Steven J., committee member; Gallegos, Erika E., committee member; Prabhu, Vinayak S., committee member
Digital engineering is an organizational effort that currently relies on the complex networked integration of heterogeneous computer hardware and software components to maintain a cohesive digital model of the system-of-interest: its Authoritative Source of Truth. The unfortunate truth is that these computer components contain myriad known and unknown defects, and their many interfaces cause severe fragility, adding significant cost, risk, and schedule to projects and to the information technology infrastructure that supports them. In this situation, additional resources must be allocated to remediating defects and gluing software and data components together with ad hoc solutions, or to relying on expensive third-party solutions that further accrete the infrastructure. While interoperability by means of Application Programming Interfaces exists in islands of support and is offered as the way forward by, for example, the Systems Modeling Language version 2, the foundations upon which these software-intensive systems are built are nevertheless untrustworthy and critically vulnerable. This dissertation argues that a clean-slate approach is necessary to address this mess: a system of interrelated problems. The needs of digital engineering stakeholders motivate a computing system architecture that guarantees consistency and coherence, is independently auditable, and is trustworthy based on strong evidence gathered from a full-source bootstrap and end-to-end formal verification, to achieve correctness-by-construction of itself and of the systems-of-interest it is employed to specify. Such a cyber-system, specifically designed for digital engineering activities, is termed a seamless digital engineering information appliance, and represents a grand challenge in digital engineering research. By applying a transdisciplinary systems engineering approach to this problem space, the assured preservation of bidirectional traceability of the stakeholder needs and requirements, detailed design specifications, and verification proof certificates becomes achievable. A systems approach to human factors and security will then result in a high-assurance, malleable human-computer interface with fine-grained security controls suited to the needs of digital engineering. Seamless Digital Engineering is defined as a digital engineering tooling paradigm, contrasted with existing digital engineering integration patterns, and characterized with the seamless integration pattern and a set of architecture tenets to guide surveys of the solution space. The rationale for the grand challenge is presented. The natural-language definition is further clarified using the expressive power of formal ontologies, resulting in ontological definitions-by-relations based on relevant systems and software engineering standards. The concept of "seamless" is disambiguated using the SQuaRE product quality model, separating it into seamless integration and seamless interaction capability quality characteristics, and seamless quality-in-use characteristics. The Seamless Digital Engineering Ontology includes over 500 concepts and is published open-source in a standard machine-readable format.
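As a rough illustration of what a machine-readable ontology fragment in this spirit can look like, the sketch below uses Python's rdflib to declare two quality-characteristic classes and a simple annotation; the namespace, class names, and property are placeholders for illustration, not terms taken from the published Seamless Digital Engineering Ontology.

```python
# Illustrative only: a tiny machine-readable ontology fragment. Names are placeholders,
# not the actual terms of the Seamless Digital Engineering Ontology.
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import OWL

SDE = Namespace("https://example.org/sde#")  # hypothetical namespace

g = Graph()
g.bind("sde", SDE)

# A parent class and two quality characteristics discussed in the abstract above
g.add((SDE.QualityCharacteristic, RDF.type, OWL.Class))
for cls in (SDE.SeamlessIntegration, SDE.SeamlessInteractionCapability):
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, SDE.QualityCharacteristic))

# A hypothetical annotation tying a characteristic to the quality model that frames it
g.add((SDE.framedBy, RDF.type, OWL.AnnotationProperty))
g.add((SDE.SeamlessIntegration, SDE.framedBy, Literal("SQuaRE product quality model")))

print(g.serialize(format="turtle"))  # emit the fragment in a standard machine-readable format
```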
An open-source SysML profile for digital requirements engineering is presented and validated in real-world projects, representing the preferred model-based technique for developing requirements in the seamless digital engineering context. Finally, the Seamless Digital Engineering Reference Architecture, defined in SysML v2, is presented, which captures essential digital engineering stakeholder goals, objectives, and needs. This reference architecture specifies multiple trustworthy bootstrap paths for the proposed seamless digital engineering appliance, with the explicit goal of bootstrapping a powerful, high-assurance digital engineering meta-language. Together, these open-source models form the basis for understanding the grand challenge so that a detailed definition of the Seamless Digital Engineering Reference Architecture can proceed.

Item Open Access: A holistic multidisciplinary decision-making approach utilizing model-based systems engineering and system dynamics for novel energy technology commercialization (Colorado State University. Libraries, 2025)
Lawrence, Svetlana, author; Herber, Daniel R., advisor; Shahroudi, Kamran Eftekhari, committee member; Bradley, Thomas H., committee member; Barbier, Edward B., committee member
The U.S. energy system is characterized by its complexity and the intricate interplay of various components, including electricity generation, non-electrical energy sources, energy consumption patterns, and the energy economy. As the nation transitions to more sustainable and resilient energy sources, it becomes evident that traditional decision-making approaches are insufficient to address the multifaceted challenges of modern energy systems. This research aims to develop a novel decision-making framework by integrating systems thinking and systems engineering principles to provide a comprehensive understanding of energy system behavior and facilitate the evaluation and deployment of novel energy technologies. Chapter 1 provides an overview of the research, states the main research question and objectives, and outlines the structure of the dissertation. Chapter 2 presents an overview of the intricate and multifaceted landscape of the U.S. energy system, exploring its various elements and the complex interactions among them. It provides a comprehensive overview of the current state of the U.S. energy system, including electricity generation, non-electrical energy sources, and energy consumption patterns. The chapter also highlights the critical role of the energy economy in shaping the transition to sustainable and resilient energy sources. Furthermore, the chapter examines the potential of hydrogen as a key player in the future energy system, emphasizing its ability to enhance energy security, reduce carbon emissions, and support diverse industrial applications. Finally, the chapter discusses the challenges and shortcomings of existing decision-making approaches for complex energy systems, underscoring the need for new methodologies that integrate multidisciplinary insights and address uncertainties. Chapter 3 elaborates on the complexity of energy systems and underscores the importance of interdisciplinary approaches to address their challenges. It highlights the roles of systems thinking and systems engineering in developing a novel decision-making framework for energy systems.
Systems thinking is presented as a holistic approach that considers both internal and external interactions of system elements, enabling better decision-making by providing insights into complex interactions and long-term perspectives. Systems engineering is defined as an interdisciplinary approach that ensures the successful realization of complex systems by connecting various engineering disciplines, evaluating stakeholder needs, and applying standardized methods throughout the system life cycle. The chapter also discusses specific methods and tools, such as system dynamics and model-based systems engineering, that are used in this research to develop a framework for informed decision-making in energy systems. Chapter 4 explores the deployment dynamics of novel energy technologies, focusing on onshore wind, utility-scale solar photovoltaic, and clean hydrogen generation energy systems. The research examines various factors influencing deployment, including policy and regulation, technological advancements, economic considerations, environmental concerns, public perception, and infrastructure capabilities. Qualitative analysis identifies key dynamics such as the role of government policies and incentives, technological advancements, economic factors, environmental concerns, and public perception in accelerating technology adoption. Quantitative modeling provides insights into the factors driving capacity growth and cost reductions, demonstrating the model's ability to simulate the trajectory of novel energy technology adoption. Sensitivity studies highlight the importance of resource availability, willingness to invest, and technological learning as influential factors affecting capacity growth. Scenario analyses confirm the significant impact of federal incentives and technological learning on capacity growth, the levelized cost of energy, and the levelized cost of hydrogen. Chapter 5 expands the exploration of energy system deployment presented in Chapter 4 into a more granular problem: the crafting of a decision support framework aimed at configuring energy systems on a smaller scale. The principal objective is to leverage systems engineering principles and tools systematically to minimize the risk of suboptimal system configurations that fail to align with stakeholder requirements or regional conditions, potentially resulting in reduced or lost profits. The need for this new approach is underscored by the inherent complexity and uncertainty in energy systems, which necessitate a structured, multidisciplinary evaluation method to facilitate high-level decision-making and ensure the selection of the most feasible and beneficial system concepts. Finally, Chapter 6 presents conclusions, research contributions, and opportunities for future work. The findings from this research have several implications for policymakers, investors, and industry stakeholders. Policymakers are encouraged to maintain consistent and supportive government policies and incentives to reduce market volatility and encourage sustained investment in renewable energy projects. Investors can benefit from understanding the dynamics of technology adoption and the factors influencing profitable capacity, emphasizing the significance of technological learning and cost reductions. Industry stakeholders should focus on scaling up developer capacity and investing in technological improvements, collaborating with policymakers to ensure supportive regulatory environments and incentives.
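To make the kind of quantitative modeling described in Chapter 4 concrete, the sketch below shows the general shape of a system-dynamics-style capacity-growth model with a learning curve and a policy incentive. It is a simplified illustration with invented parameter values, not the model developed in the dissertation.

```python
# Simplified, illustrative system-dynamics-style simulation of technology adoption.
# Parameter values are invented for illustration and do not come from the dissertation.
import numpy as np

def simulate(years=30, dt=0.25, initial_capacity=1.0, initial_cost=100.0,
             learning_rate=0.15, incentive=0.3, base_adoption=0.08):
    steps = int(years / dt)
    capacity = np.empty(steps + 1)
    cost = np.empty(steps + 1)
    capacity[0], cost[0] = initial_capacity, initial_cost
    b = np.log2(1 - learning_rate)  # learning exponent: unit cost falls as capacity doubles
    for t in range(steps):
        cost[t + 1] = initial_cost * (capacity[t] / initial_capacity) ** b
        # Adoption flow grows as unit cost falls and incentives raise willingness to invest
        attractiveness = (initial_cost / cost[t + 1]) * (1 + incentive)
        adoption_flow = base_adoption * capacity[t] * attractiveness
        capacity[t + 1] = capacity[t] + adoption_flow * dt
    return capacity, cost

capacity, cost = simulate()
print(f"Final capacity: {capacity[-1]:.1f} (relative units); final unit cost: {cost[-1]:.1f}")
```

Sensitivity and scenario analyses of the kind described above would then rerun such a model while sweeping parameters such as learning_rate and incentive.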
In summary, the transition to novel energy technologies is a complex but essential process in addressing climate change and ensuring energy security. This research highlights the critical factors influencing this transition and provides a robust model for understanding the dynamics of energy technology adoption. By leveraging these insights, stakeholders can make informed decisions to support the accelerated deployment of renewable energy systems, contributing to a sustainable and resilient energy future.

Item Open Access: A model-based system for on-premises software-defined infrastructure (Colorado State University. Libraries, 2025)
Enos, Eric S., author; Herber, Daniel R., advisor; Conrad, Steven A., committee member; Shahroudi, Kamran Eftekhari, committee member; Gallegos, Erika E., committee member; Mangal, Ravi, committee member
This dissertation develops and evaluates a novel framework for the adoption of on-premises software-defined infrastructure (SDI) within large, skill-based IT organizations. Focusing on a case study of a major US healthcare provider, the research investigates whether cloud-inspired automation techniques commonly associated with DevOps can deliver meaningful benefits in environments heavily reliant on traditional, on-premises technologies. First, a hybrid simulation approach, integrating System Dynamics and Discrete Event Simulation, depicts both project-based tasks and unscheduled operational work within the case study organization. The findings suggest that automating high-volume or time-critical processes can reduce queuing, shorten response times, and lower error rates by addressing the unique constraints that arise when teams of mixed skill levels must simultaneously manage both project deliverables and incident-driven activities. Subsequently, the dissertation applies model-based systems engineering (MBSE) to guide the systematic design of an on-premises SDI management system. Using the Systems Modeling Language (SysML), a reference architecture is defined that outlines the orchestration, code management, and integrations required to enable a unified, programmable environment across servers, storage, and network resources. This architecture leverages existing tools and hardware investments, providing a cohesive layer through which code-driven automation can be deployed and maintained. Finally, a phased implementation roadmap is proposed in tandem with a quantitative business-case analysis. The recommended approach advocates incremental adoption, beginning with tasks that benefit most from automated provisioning and event-driven response. Taken together, this research offers a practical blueprint for healthcare and similarly structured organizations seeking to modernize their IT environments, enhance operational efficiencies, and harmonize DevOps methodologies with existing on-premises systems and management practices.

Item Open Access: Human-guided, AI-accelerated system dynamics via pipeline algebra (Colorado State University.
Libraries, 2025)
Reinholtz, Kirk, author; Shahroudi, Kamran Eftekhari, advisor; Aloise-Young, Patricia, committee member; Simske, Steve, committee member; Troxell, Wade, committee member
System Dynamics (SD) gives Systems Engineers (SEs), and Model-Based Systems Thinking (MBST) practitioners in general, a rigorous way to reason about feedback-rich problems, yet adoption remains low because assembling a causal-loop diagram (CLD), validating its logic, and converting it into an executable model are still labor-intensive and require specialized skills. Human analysts remain central to judgment, but they should spend their time on insight rather than tool wrangling. This dissertation demonstrates that chat-based large language model (LLM) pipelines can remove that bottleneck, automating polarity-reversal checks, loop-dominance mapping, latent embeddings that capture joint structure-behavior signatures for similarity search, missing-loop discovery, and first-cut model synthesis, thereby lowering the entry barrier for SEs and cutting typical SD turnaround from days to minutes. Powered by ChatGPT o3, a transformer pre-trained on internet-scale corpora of prose, source code, and mathematical notation, a single interactive session can (i) read narrative text, (ii) propose syntactically complete CLDs, (iii) diagnose structural anomalies, and (iv) translate diagrams into executable SD code. The pipeline then runs an LLM-generated simulator and, through chain-of-thought prompting, iteratively tunes loop structure and parameters until simulated behavior reproduces a reference mode that the same LLM mined from prose and refined via targeted web search. These feats rest on three enablers: cross-modal pattern learning that maps language to graph and code representations; chain-of-thought prompts that force the model to externalize intermediate reasoning; and an agentic, simulator-in-the-loop refinement cycle that tests and revises its own drafts. The full loop finishes in minutes, far faster than manual workflows, and while domain judgment is still decisive at checkpoints, no specialized SD software expertise is required. Three studies validate the approach. First, the pipeline outperformed forty-three graduate students and matched an instructor benchmark when extracting Janis's groupthink CLD, finishing each run in under fifteen minutes. Second, it converted qualitative CLDs into quantitative executable SD model simulators in minutes through expert-in-the-loop refinement. Third, symbolic routines with no LLM involvement computed generalized loop sets for 678 models and clustered 59K equations from a 1K-model corpus; by clarifying how feedback structures share influence across behavior modes, loop sets provide SEs a principled aid to loop-dominance comprehension and are slated for integration with the LLM toolkit. Every transformation, whether an LLM invocation, a symbolic routine, or a shell command, is expressed in Pipeline Algebra (PA), a typed workflow language that serves as an executable notation for thought, records explicit pre- and post-conditions, supports deterministic replay, and aligns naturally with meta-algorithmic control.
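The abstract does not reproduce Pipeline Algebra's notation, but the general idea of typed, replayable steps with explicit pre- and post-conditions can be sketched as follows; the names and structure here are hypothetical illustrations, not the dissertation's actual operators.

```python
# Hypothetical illustration of typed pipeline steps with explicit pre-/post-conditions,
# in the spirit of the Pipeline Algebra described above (not its actual notation).
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    name: str
    run: Callable[[Any], Any]
    pre: Callable[[Any], bool] = field(default=lambda x: True)    # precondition on the input
    post: Callable[[Any], bool] = field(default=lambda y: True)   # postcondition on the output

    def __call__(self, x):
        assert self.pre(x), f"{self.name}: precondition failed"
        y = self.run(x)
        assert self.post(y), f"{self.name}: postcondition failed"
        return y

def compose(*steps: Step) -> Step:
    """Sequential composition; the recorded step order supports deterministic replay."""
    def run_all(x):
        for s in steps:
            x = s(x)
        return x
    return Step(name=" >> ".join(s.name for s in steps), run=run_all)

# Toy example: prose -> causal-loop edges -> executable-equation stubs
extract_cld = Step("extract_cld",
                   run=lambda text: [("workload", "errors", "+")],
                   pre=lambda text: isinstance(text, str) and len(text) > 0)
to_equations = Step("to_equations",
                    run=lambda edges: [f"d({dst})/dt depends on {src}" for src, dst, _ in edges],
                    pre=lambda edges: len(edges) > 0)

pipeline = compose(extract_cld, to_equations)
print(pipeline("Rising workload increases error rates, which adds rework ..."))
```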
Forthcoming primitive operators, exposed as terminal functions through the OpenAI functions API, will let GPT itself invoke loop-set analytics, polarity-reversal detection, and higher-order transformations such as map and comap, unifying them within the same declarative fabric and steering control from bespoke orchestration code toward the language model, thereby laying the groundwork for self-optimizing model laboratories that combine formal mathematics with AI-guided pattern discovery. By combining LLM automation with a rigorous workflow backbone, the approach lets SEs exploit SD with far less overhead and far greater throughput. By shifting the analytic burden from tool wrestling to insight building, it invites a wider pool of engineers and decision makers to deploy feedback-based modeling, an advance that can sharpen climate-mitigation policy and other responses to feedback-driven challenges.

Item Open Access: Yet another MBSE methodology (YAMM): a DevOps and empirically-based unified-modeling, design, analysis, and optimization methodology for software-centric data systems (Colorado State University. Libraries, 2025)
Booth, Thomas M., author; Ghosh, Sudipto, advisor; Herber, Daniel, committee member; Blanchard, Nathaniel, committee member; Vijayasarathy, Leo, committee member
Data systems consist of a network of communication channels, software that processes, creates, or transmits data across these channels, and the hardware that runs these applications and generates data. Software-centric data systems rely on software to define system behavior, security, and other capabilities. To keep up with rapidly shifting system requirements, many successful organizations utilize software Development and Operations (DevOps) principles to increase software quality and the throughput of new software capabilities. Many organizations have modern data systems that include cloud resources, and 49% of these organizations lack a Financial Operations (FinOps) team for cloud management and optimization. A third of these organizations spend more than $12 million/year on cloud costs, and more than half are regularly 18% over their annual cloud budget. Similar resource management issues exist for aircraft avionics data systems, which also cost organizations millions of dollars a year. Adopting a practical system analysis and optimization methodology is one solution to these organizational budget issues. Model-Based Systems Engineering (MBSE) enables the design, analysis, and optimization of complex systems and has been in practice since the late 20th century. The Systems Modeling Language (SysML) is currently the most common graphical specification language used with MBSE methodologies. Unfortunately, there is insufficient research into MBSE and SysML that shows a positive Return On Investment (ROI) for the design, development, sustainment, and optimization of software-centric data systems. Additionally, to keep up with the rapid deployment of software capabilities, a practical MBSE methodology and modeling approach for data systems would need to include DevOps principles. We present Yet Another MBSE Methodology (YAMM), a novel and practical MBSE methodology for the design, development, optimization, and sustainment of software-centric data systems. YAMM refines and extends the Harmony agile MBSE process (Harmony aMBSE) to provide a more prescriptive and tailored methodology for solving data-system-specific issues.
We also present our novel Unified Modeling Approach (UMA), which combines graphical system specifications and simulations into a single, empirically derived system model that accurately predicts data system resource usage, cost, and performance. Together, YAMM and UMA integrate MBSE with DevOps and FinOps principles to improve data system performance and reduce costs using continuous, empirically driven feedback throughout the system acquisition and sustainment phases. We demonstrate and evaluate YAMM and UMA using case studies of two different types of data systems that include operationally relevant empirical data. The first case study implemented the YAMM framework to enable the training, selection, and analysis of a sensor fusion Machine Learning (ML) model for a legacy aircraft sensor data system. Results show the value of using compute, memory, and networking resource consumption, in addition to performance and accuracy, as evaluation criteria for ML model analysis and selection. YAMM and UMA are demonstrated in the second case study on the design, development, optimization, and sustainment of a hybrid cloud analytical data system prototype in Amazon Web Services (AWS). The non-optimized prototype system had a quadratically increasing cost, with 1- and 5-year cost estimates of $1.9M and $27.7M. Results showed that the unified model had a Root Mean Squared Error (RMSE) of $865 when compared against AWS cost data. The Pareto-optimal design showed a positive YAMM ROI after 361 days, with an estimated 5-year savings of $12.7M without reduction in system performance. Our UMA is limited to systems without physics-based or entity-based simulation requirements. However, the results show that implementing YAMM with UMA can reduce costs, increase performance, and provide insights into dominant system characteristics for the design, development, optimization, and sustainment of software-centric data systems.

Item Open Access: Engineering of intelligent systems for sustainable cement manufacturing (Colorado State University. Libraries, 2025)
Oguntola, Olurotimi, author; Simske, Steve, advisor; Shahroudi, Kamran Eftekhari, committee member; Gallegos, Erika, committee member; Ortega, Francisco, committee member
Cement-based materials have been used for urban development since historic times, remain important to the present day, and will be required for construction in the foreseeable future. However, cement manufacturing is by its nature carbon-intensive and energy-intensive. The cement industry faces significant challenges in implementing sustainable practices and reducing its environmental footprint. Carbon dioxide emissions from global cement production have increased at a higher rate than cement production rates. While traditional carbon reduction efforts have focused on thermal energy use in the calcination process of cement manufacturing, electrical energy consumption represents a substantial but often overlooked opportunity for sustainability improvements. This dissertation employs a systems engineering approach to address this gap by developing intelligent systems for sustainable cement manufacturing with a focus on decarbonization through electrical energy consumption optimization. Through a systematic review of research published between 1993 and 2023, life cycle assessment, and techno-economic analysis, this study demonstrates that substantial environmental and economic benefits can be achieved through innovative approaches.
Analysis of four scenarios formed from a combination of two cement types (ordinary Portland cement, Portland-limestone cement) and two energy sources for thermal heating (coal, dried biosolids) indicates that increased production and adoption of Portland-limestone cement with up to 15% limestone can reduce carbon footprints by 6.4%, while using dried biosolids as combustion fuel can yield a 7.9% emission reduction compared to baseline. More significantly, the application of a memory-efficient hybrid variant of Causal Bayesian Optimization (CBO) to raw meal grinding indicates potential specific electrical energy consumption reductions of 26.7%. The study also introduces an IoT-inspired deployment framework for continuously assessing environmental and economic impacts and proposes that, with Industry 4.0 digitalization and advancements in data analytics, artificial intelligence can extract operational insights from plant sensors and meters. This presents a cost-effective, high-return, and low-risk opportunity to optimize electrical energy consumption in cement manufacturing. By understanding causal relationships between cement plant system components and implementing targeted interventions to optimize electrical energy consumption in the production process, cement manufacturers can significantly contribute to decarbonization efforts, improve sustainability and resource efficiency, and enhance profitability and public image.

Item Open Access: Accelerating capability to the fleet: rapid fielding of small unmanned surface vehicles (Colorado State University. Libraries, 2025)
Phillips, John, author; Gallegos, Erika, advisor; Simske, Steven, committee member; Vans, Marie, committee member; Wise, Dan, committee member
The urgency to more rapidly field capability is critical to the US Navy's future, particularly as senior naval leadership has challenged the naval enterprise to accelerate the fielding of robotics and autonomous systems. However, traditional requirements, resourcing, and acquisition processes often take over a decade to field needed capability. The objective of this dissertation is to describe a novel framework developed to meet this challenge, which is demonstrated using a case study of fielding small Unmanned Surface Vehicles (sUSVs) for the US Pacific Fleet from 2022 to 2025. The framework begins with an adaptation of the innovation pipeline, executing a 12-week sprint process of problem sourcing, curation, discovery, incubation, and scaling. The results of the sprint process make the case that a solution has warfighting utility, is technically feasible, and has a path to scale. This output is provided to leadership to make an informed decision to transition into a prototype project phase, which aims to continue learning while validating sprint results. The second part of this framework uses the sUSV prototypes to build a campaign of learning, leveraging Fleet experiments while implementing a DevOps model utilizing both government and industry to rapidly learn, adapt, and improve the capability. This step focuses on demonstrating warfighting utility and technical feasibility, and on building advocacy for the path to scale. The third step implements an in-parallel but collaborative approach to acquisition, systems engineering, and Fleet adoption, leveraging the lessons learned from the previous steps to more rapidly field and employ the sUSV capability at scale.
This dissertation provides an overview of, and lessons learned from, a real Navy case study in which the time to field capability was reduced significantly. While this may not be applicable to all Navy programs, this research offers insights to help the naval enterprise when challenged to rapidly field capability.

Item Open Access: An algorithmic semantic analysis of cyber security and resilience guidance against interdisciplinary understanding of resilience concepts across time and scale (Colorado State University. Libraries, 2025)
Hilger, Ryan, author; Simske, Steve, advisor; Cross, Jennifer, committee member; Daily, Jeremy, committee member; Ray, Indrakshi, committee member
This dissertation bridges critical gaps between cybersecurity frameworks and interdisciplinary resilience theory through innovative algorithmic analysis. Rather than pursuing an elusive singular definition of resilience, I employ statistical modeling and machine learning techniques to extract core resilience attributes from a diverse corpus of 102 unique definitions across fields including ecology, psychology, disaster management, and organizational studies. My research addresses two fundamental questions: (1) Does any existing cybersecurity strategy or guidance document comprehensively address resilience across temporal and scalar dimensions? (2) How do current frameworks conceptualize and operationalize resilience? The methodological approach integrates term frequency-inverse document frequency (tf*idf), Latent Dirichlet Allocation, and bidirectional encoder representations from transformers (BERT) algorithms to construct a novel classification scaffold based on time and scale dimensions. This scaffold systematically evaluates 37 cybersecurity frameworks and 12 non-cyber resilience frameworks against core resilience attributes. Results reveal significant gaps between cybersecurity guidance and interdisciplinary resilience concepts, with most frameworks focusing predominantly on technical and sociotechnical aspects while neglecting broader organizational, community, and temporal dimensions of resilience. This research makes several key contributions: (1) establishing a data-driven classification framework for assessing resilience features in guidance documents, (2) demonstrating that no single existing framework adequately addresses resilience across all relevant dimensions, and (3) providing a foundation for developing more comprehensive cyber resilience strategies. The findings offer both theoretical advancement in conceptualizing cyber resilience and practical guidance for organizations seeking to build more adaptable and resilient systems across multiple time horizons and organizational scales.

Item Open Access: Scalable system architecture for CubeSat test & evaluation for enhanced mission success (Colorado State University. Libraries, 2025)
Magone, Laurence Gregory, author; Simske, Steven, advisor; Cale, James, committee member; Herber, Daniel, committee member; Reising, Steven, committee member
CubeSats, as small, low-cost satellites, are the next generation of uncrewed space missions; however, the failure rate of University-class CubeSats is high. The failure rate is a major drawback to relying on CubeSats for scientific exploration of the universe. This research aims to develop a scalable system architecture for CubeSat Test and Evaluation to improve the success rate of CubeSat missions.
The research started with a literature review on the topics of Systems Architecture, Test and Evaluation, and CubeSats, and finally closed in on previously published literature relating to CubeSat Test and Evaluation systems. The literature review identified a gap in research around scalable system architectures for CubeSat Test and Evaluation, and the remainder of this dissertation closes that gap. After completing the literature review, the basis for the systems architecture for CubeSat missions and CubeSat systems was developed. The initial basis provided a framework for further discussion on CubeSat Test and Evaluation systems architecture. Next, a survey was conducted of CubeSat engineers and engineering students to ascertain current philosophies towards CubeSat Test and Evaluation. After completing the survey, a time study was conducted on a CubeSat simulator to gather real-life data on the amount of time required to conduct test and evaluation on a CubeSat. Next, a simulation was run to determine the probability of mission failure depending on what type of CubeSat was tested. Finally, the results of the research were plotted on a Pareto diagram, where a Pareto front identified the optimal spread of tests on prototype, engineering qualification model, and full flight model CubeSats. Sensitivity analysis was performed, comparing the original optimized solution with four alternates with different inputs. The next step of the research was to use the collected data to develop the scalable systems architecture for CubeSat Test and Evaluation. This was accomplished via a series of model-based systems engineering drawings and diagrams. The final step of the research was to identify four previous satellite development projects that experienced mission failure and apply the scalable systems architecture to those projects to determine if the application of the proposed systems architecture would result in improved mission success. The results showed that in some, but not all, cases the proposed systems architecture would have improved the success of the mission. Finally, suggestions for future work are presented, which include formalized requirements idealization, conducting time studies of testing on actual CubeSats, and extending the research to other industries such as railroad testing, aircraft certification testing, and nuclear submarine testing.

Item Open Access: Advanced capacity and dispatch co-design for the techno-economic optimization of integrated energy systems (Colorado State University. Libraries, 2025)
Gulumjanli, Ziraddin, author; Herber, Daniel R., advisor; Coburn, Timothy C., committee member; Paglioni, Vincent P., committee member; Jia, Gaofeng, committee member
This thesis explores the techno-economic performance of integrated energy systems by means of a linear optimization framework implemented with direct transcription inside the DTQP environment. Over operational and financial time horizons, the model co-optimizes generation and storage technologies to maximize net present value (NPV) under different techno-economic assumptions. The basic dynamics of the subsystems are specified, with a special focus on balancing the relevant physical and financial domains to enable effective decision-making within the framework of capacity and dispatch optimization.
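The following toy linear program illustrates the general form of such a capacity-and-dispatch co-design problem. It uses scipy's linprog on a four-period price profile with no discounting, so it is a sketch of the formulation only; it does not use DTQP, and all numbers are invented.

```python
# Toy capacity-and-dispatch co-design LP (illustrative values; not the thesis model or DTQP).
import numpy as np
from scipy.optimize import linprog

prices = np.array([30.0, 55.0, 120.0, 45.0])   # $/MWh electricity price per period
demand = np.array([80.0, 100.0, 150.0, 90.0])  # MW upper bound on dispatch per period
capex = 100.0       # $/MW of installed capacity over this short horizon (toy value)
var_cost = 20.0     # $/MWh variable operating cost
dt = 1.0            # hours per period
T = len(prices)

# Decision vector x = [capacity, dispatch_1..dispatch_T]; maximize profit = minimize -profit
c = np.concatenate(([capex], (var_cost - prices) * dt))

# Dispatch in each period cannot exceed installed capacity: dispatch_t - capacity <= 0
A_ub = np.hstack([-np.ones((T, 1)), np.eye(T)])
b_ub = np.zeros(T)
bounds = [(0, None)] + [(0, d) for d in demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
capacity, dispatch = res.x[0], res.x[1:]
print(f"Optimal capacity: {capacity:.1f} MW, operating profit net of capex: ${-res.fun:,.0f}")
print("Dispatch by period (MW):", np.round(dispatch, 1))
```

A full co-design formulation would add storage state-of-charge dynamics, multi-year discounting for NPV, and the carbon-tax and fuel-price parameters explored in the case studies.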
Three sample case studies (natural gas with thermal storage, wind power with battery systems, and nuclear energy with hydrogen storage) are thoroughly analyzed, including sensitivity analysis, in order to extend the basic concept. To assess their impact on optimal investment and deployment policies, key input parameters such as carbon tax levels, power and fuel prices, and capital and operating expenses are methodically varied. Results show that some factors, such as generator capital expenditures and, especially, electricity and energy prices, strongly influence economic results, while others have little effect. These results are presented using scenario-specific outputs, comparison graphs, and trajectory-based insights, providing useful guidance on model robustness and decision-critical assumptions.

Item Open Access: AI/ML tools for early decision making in water system operations: managing non-stationarity water quality (Colorado State University. Libraries, 2025)
Vizarreta Luna, Guillermo Alonso, author; Conrad, Steven, advisor; Arabi, Mazdak, committee member; Grigg, Neil, committee member; Kennan, Alan, committee member
The assumption that natural systems oscillate within a stationary range of variability has traditionally guided water system management and allowed water utilities to experience steady-state operations. These steady-state operations are viewed as the 'normal state' of the water system. However, non-stationarity events such as wildfires, droughts, and floods shift water systems to new 'states' that negatively impact water quality and complicate water treatment decision-making and performance. Many water system managers do not account for these variations systemically and instead respond reactively as watersheds shift from perceived normal states. To enhance their operational resilience and develop adaptive and robust methodologies, water utilities must gain knowledge of non-stationarity states. Artificial Intelligence (AI) and its subset, Machine Learning (ML), are emerging as key tools for addressing the impacts of non-stationarity events on water system operations. This thesis responds to this gap by investigating how AI and ML can be applied to address these impacts. It provides a review of how AI supports decision-making in drinking water treatment systems when non-stationary water quality states occur due to perturbations. This study provides a summary and observations on: (1) understanding the boundary influences on water quality due to non-stationarity events and their implications for drinking water treatment processes; (2) exploring how AI/ML methods inform stationarity and non-stationarity system state patterns; and (3) applying AI/ML models to develop a TOC predictive tool and assessing their potential use to address non-stationarity water quality states.

Item Open Access: The application of Agile to large-scale, safety-critical, cyber-physical systems (Colorado State University. Libraries, 2025)
Yeman, Robin, author; Malaiya, Yashwant, advisor; Adams, Jim, committee member; Simske, Steve, committee member; Herber, Daniel, committee member; Arneson, Erin, committee member
The increasing complexity of large-scale, safety-critical cyber-physical (LS/SC/CP) systems, characterized by interconnected physical and computational components that must meet stringent safety and regulatory requirements, presents significant challenges to traditional development approaches.
Traditional development approaches, such as the waterfall methodology, often struggle to meet demands for adaptability, speed, and continuous assurance. This dissertation explores the feasibility of applying and adapting Agile methodologies to LS/SC/CP systems, focusing on challenges such as regulatory compliance and rigorous verification, while seeking to demonstrate benefits such as improved risk management and faster development cycles. Through case studies and simulations, this research provides empirical validation of Agile's effectiveness in this domain, contributing a framework for adapting Agile practices to meet the unique demands of LS/SC/CP systems. Employing a mixed-methods approach, the research comprises five key components. First, a systematic literature review (SLR) was conducted to assess the current state of Agile adoption in LS/SC/CP environments. Second, a comparative analysis of the top 10 Agile scaling frameworks was performed to evaluate their suitability for LS/SC/CP system development. Third, a survey of 56 respondents provided both quantitative and qualitative insights into industry trends, adoption patterns, and Agile's impact on LS/SC/CP systems. Fourth, 25 one-on-one interviews with industry practitioners further explored the challenges, benefits, and enablers of Agile adoption in these environments. Finally, lifecycle modeling (LML) using Innoslate was utilized to develop a fictional case study, modeling the development of a mid-size low Earth orbit (LEO) satellite using both NASA's Waterfall approach (Phases A-D) and an Agile approach with a series of Minimum Viable Products (MVPs). Findings reveal that Agile methodologies, when adapted for LS/SC/CP systems, enable accelerated development cycles, reducing development time by a factor of 2.5 compared to Waterfall while maintaining safety and regulatory compliance. A key contribution of this study is the introduction of a Continuous Assurance Plugin, which integrates continuous validation within Agile's iterative processes, effectively addressing compliance and safety requirements traditionally managed through phase-gated reviews in Waterfall. Additionally, this research provides:
1. Empirical validation of Agile scaling frameworks and their suitability for delivering LS/SC/CP systems.
2. Quantitative and qualitative analysis of Agile's current state and impact in LS/SC/CP environments.
3. Evaluation of key enabling technologies, such as Model-Based Systems Engineering (MBSE), Digital Twins, and Continuous Integration/Continuous Deployment (CI/CD), that facilitate Agile adoption for LS/SC/CP systems.
This dissertation advances the understanding of Agile's role in LS/SC/CP system development, providing actionable insights and practical adaptations for organizations seeking to implement Agile in complex, safety-critical domains.

Item Open Access: Safeguarding sensitive data: prompt engineering for Gen AI (Colorado State University. Libraries, 2025)
Giang, Jennifer, author; Simske, Steven J., advisor; Marzolf, Gregory, committee member; Gallegos, Erika, committee member; Ray, Indrajit, committee member
Generative Artificial Intelligence (GenAI) represents a transformative advancement in technology, with capabilities to autonomously generate diverse content such as text, images, simulations, and beyond. While GenAI offers significant operational benefits, it also introduces risks, particularly in mission-critical industries such as national defense and space.
The emergence of GenAI is similar to the invention of the internet, electricity, spacecraft, and nuclear weapons. A major risk with GenAI is the potential for data reconstruction, where AI systems can inadvertently regenerate or infer sensitive mission data, even from anonymized or fragmented inputs. This is relevant today because we are in an AI arms race against our adversaries, much like the race to the Moon and the development of nuclear weapons. Such vulnerabilities pose profound threats to data security, privacy, and the integrity of mission operations, with consequences for national security and societal safety and stability. This dissertation investigates the role of prompt engineering as a strategic intervention to mitigate GenAI's data reconstruction risks. By systematically exploring how tailored prompting techniques can influence AI outputs, this research aims to develop a robust framework for secure GenAI deployment in sensitive environments. Grounded in systems engineering principles, the study integrates theoretical models with experimental analyses, assessing the efficacy of various prompt engineering strategies in reducing data leakage, bias, and confabulation. The research also aligns with AI governance frameworks, including the NIST AI Risk Management Framework (RMF) 600-1, addressing policy directives such as Executive Order 14110 on the safe, secure, and trustworthy development of AI. Through mixed-methods experimentation and stakeholder interviews within the defense and space industries, this work identifies key vulnerabilities and proposes actionable mitigations. The findings demonstrate that prompt engineering, when applied systematically, can significantly reduce the risks of data reconstruction while enhancing AI system reliability and ethical alignment. This dissertation contributes to the broader discourse on Responsible AI (RAI), offering practical guidelines for integrating GenAI into mission-critical operations without compromising data security. This underscores the imperative of balancing GenAI's transformative potential with the societal need for robust safeguards against its inherent risks.

Item Open Access: Analysis of a cybersecurity architecture for satellites using model-based systems engineering (MBSE) approaches (Colorado State University. Libraries, 2025)
Johnson, Daniel, author; Bradley, Thomas, advisor; Poturalski, Heidi, committee member; Adams, Jim, committee member; Herber, Daniel, committee member; Reising, Steve, committee member
Historically, satellites have been relatively isolated from cybersecurity threats. However, during the 2020s, cyberattacks on critical ground-based infrastructure became more common and prevalent, and with the increasing technological advancement of peer adversaries, the United States government has come to recognize and define an increasing level of vulnerability in space-based assets as well. This doctoral research seeks to understand and address cybersecurity vulnerabilities inherent in commercial small-scale satellite architectures by demonstrating how model-based systems engineering (MBSE) can enable the design and analysis of a cyber-secure satellite architecture. To determine the cybersecurity vulnerabilities applicable to satellites, a scholarly review of literature on cybersecurity threats and mitigation techniques was performed and applied to satellite systems.
The result of this scholarly review is an assessment of the cybersecurity threats applicable to satellites, with a particular focus on small satellite architectures, and an understanding of current cybersecurity threat agents and the categories of cyber threats applicable to such satellites. Common architectures and satellite components were analyzed to determine vulnerabilities that could be exploited. The next phase of research then evaluated how industry has applied cybersecurity practices to satellite systems. We determined the gaps that industry currently faces and recommended a set of generic requirements that could help create a cyber-secure satellite from early in the program lifecycle. The final phase of research synthesized the findings from the first two phases to build an MBSE model that integrates cybersecurity engineering and satellite architecture into a single design process. We also analyzed the benefits to a company of applying the MBSE architectural process, paying particular attention to reusability of the model, cost, and the human-centered benefits of committing to MBSE for multiple programs. A finding of this research is that the cybersecurity vulnerabilities of satellites are due to two main factors. First, as technology has advanced and become more available, there is a changing threat landscape in which satellite launch is more accessible, increasing the risk that threat actors can compromise unprotected satellites. Second, space technology has lagged behind terrestrial information and cyber technology in its ability to adapt and overcome cybersecurity threats, creating vulnerabilities in satellite architectures. Another revelation is the disconnect between traditional software engineers and their cyber engineer counterparts, leading to a lack of understanding of key cyber-vulnerabilities during the design process. This leads to a consequential need to build cyber-protections into the design process from program initialization. Finally, the cyber tools in use today are also disconnected from the other traditional architectural design tools, leading to our conclusion that all of the tools must be integrated under an MBSE design process, furthering the evolution of systems engineering while also encouraging the industry to incorporate cybersecurity into satellite programs from the beginning. Upon completion of this research project, the contributions are a scholarly review of the literature on cybersecurity threats and mitigation techniques in space and satellite systems, an evaluation of a set of cybersecurity requirements for satellite systems application, an MBSE example case for a cybersecurity-embedded satellite system, and an evaluation of the costs and benefits of an MBSE-enabled architecting process as applied to an industrial satellite system architecting process. In combination, this research represents novel contributions to the state of the field by defining the cybersecurity vulnerabilities of space systems and exhibiting how MBSE can aid in a cyber-secure architecting process.

Item Open Access: Vision based artificial intelligence for optimizing e-commerce experiences in virtual reality (Colorado State University.
Libraries, 2025)
Alipour, Panteha, author; Gallegos, Erika, advisor; Bradley, Thomas, committee member; Vans, Marie, committee member; Arefin, Mohammed, committee member
Advancements in artificial intelligence (AI) and digital technologies have deeply reshaped consumer behavior and marketing strategies, demanding innovative approaches to decoding and optimizing customer engagement. This dissertation explores the potential of vision deep neural networks, generative AI, and virtual reality (VR) to analyze emotional and behavioral responses and enhance strategic business insights in digital commerce. The research focuses on convolutional neural network (CNN) architectures and evaluates their effectiveness in predicting consumer engagement through facial emotion recognition (FER). The dissertation addresses limitations in FER datasets by integrating synthetic data generated using generative adversarial networks (GANs) with real-world open data extracted from social media. This hybrid approach enhances model generalizability across diverse demographics and advertisement categories. The dissertation further investigates the role of immersive VR environments in influencing consumer engagement and purchase intent. By leveraging multi-modal causal analysis, it examines the interplay between VR design complexity, exposure sequencing, and emotional responses, providing actionable insights for optimizing e-commerce experiences. Ethical considerations are central to this research, which addresses biases, privacy concerns, and transparency in AI-driven decision-making. The findings contribute to the development of robust, inclusive, and scalable frameworks for personalized commerce, offering a transformative approach to understanding consumer behavior in digital environments. Through a systematic integration of vision deep learning, generative AI, and VR technologies, this dissertation bridges critical gaps between systems engineering research and business applications, advancing both theoretical understanding and practical application in consumer engagement optimization.

Item Open Access: Eliciting cybersecurity goals for cyber-physical system conceptual design (Colorado State University. Libraries, 2025)
Span, Martin "Trae", author; Daily, Jeremy, advisor; Bradley, Thomas, committee member; Simske, Steve, committee member; Wise, Dan, committee member
This research contributes to the systems engineering body of knowledge by advancing security by design for Cyber-Physical Systems (CPS). It leverages Systems Thinking and Model-Based Systems Engineering (MBSE) methodologies to address both organizational and technical challenges in early-stage secure system development. The research is structured around two primary questions: (1) What recommendations can improve CPS design teams with respect to security? (2) Can secure system design be improved through early system security goal elicitation? To address the first research question, a systematic analysis utilizing Systems Thinking tools, such as iceberg models, causal loop diagrams, and system modeling, is conducted. These analyses identify the root causes of weak security design within CPS development teams, revealing systemic organizational challenges, ineffective mental models, and gaps in team members' knowledge, skills, and abilities. The research presents targeted recommendations to enhance security considerations within design teams by implementing Systems Thinking principles, refining organizational structures, and prioritizing security training.
However, findings indicate that training alone is insufficient for achieving secure CPS design, necessitating a more structured approach to eliciting security design considerations in early system development. The second research question is answered with the development of Eliciting Goals for Requirement Engineering of Secure Systems (EGRESS), a novel methodology designed to facilitate system security goal elicitation during the conceptual design phase of CPS. By addressing a critical gap in current systems engineering practices, EGRESS provides a structured and traceable approach to defining security goals before an architecture is established. This method incorporates best practices from Systems Thinking, loss-driven engineering analysis, and MBSE to ensure security is foundational in CPS design rather than an afterthought. Furthermore, the research evaluates the applicability of the Risk Analysis and Assessment Modeling Language (RAAML) standard for cybersecurity and proposes refinements to enhance its utility for security analysis in CPS design. The key contribution of this work utilizes Popper's falsification principle to evaluate the hypothesis that secure system design can be improved through early security goal elicitation. Given the lack of long-term operational data proving increased security over a system's lifecycle, falsification serves as a rigorous alternative by testing for refutation rather than statistical verification. The research demonstrates that EGRESS cannot be falsified, supporting its validity in improving secure system design. This claim is further reinforced through peer-reviewed evaluations and expert discussions within the systems engineering and security communities, where, through publication, the methodology's utility was recognized and endorsed. Beyond methodology development, this research contributes to the broader systems engineering body of knowledge by addressing the distinction between requirements and security-focused system goals. It also explores the balance between common and custom SysML profiles to improve security goal elicitation. These contributions collectively support the advancement of more resilient and secure CPS architectures, aligning with the broader vision of integrating security as a fundamental design consideration alongside functionality and safety.

Item Open Access: Navigating the maze: the effectiveness of manufacturer support in applying user-controlled security and privacy features (Colorado State University. Libraries, 2025)
Shorts, Kelvin R., author; Simske, Steve, advisor; Daily, Jeremy, committee member; Vans, Marie, committee member; Reisfeld, Brad, committee member
Internet of Things (IoT) technologies have reshaped the home computing environment by offering extraordinary levels of convenience, automation, and efficiency. With technologies ranging from thermostats that adjust for cost savings to water leak detectors that protect homes from costly water damage, IoT devices in the residential space are here to stay. Collectively, these interconnected devices targeted at the consumer home environment are commonly referred to as a "smart home". Despite the many capabilities that smart home IoT technologies offer, many consumers/end-users are still struggling with effectively securing their internet-connected devices, safeguarding personal data, and ensuring that their smart home network remains secure from potential threats.
The responsibility for safeguarding smart home IoT devices is shared by both manufacturers and consumers/end-users; however, the extent to which manufacturers provide clear, comprehensive, and accessible guidance to assist consumers/end-users with safeguarding IoT devices remains unclear. This research study explores the level of support provided by smart home IoT manufacturers in applying user-controlled security and privacy features. User-controlled security and privacy features are settings within an IoT device that only the end-user can adjust (e.g., passwords, multi-factor authentication, device permissions, and data backup). A systems engineering–focused, mixed-methods approach was adopted to evaluate how effectively smart home IoT manufacturers guide and assist consumers in understanding, implementing, and maintaining user-controlled security and privacy features in their smart home IoT devices and systems. The study unfolds across four systems engineering phases: (1) Requirements Analysis, (2) Usability Testing, (3) Focus Group Technical Deep Dive, and (4) Recommendations and Future Implementations. A review of smart home IoT device manuals, online resources, and other manufacturer-provided materials established a baseline for how well the reference material aligned with cybersecurity industry standards, best practices, and recommendations. Through structured surveys, proficiency tests, and qualitative feedback from the focus group technical deep dive, the study identified gaps in smart home IoT manufacturers' guidance that compromise users' ability to configure essential security settings. Employing systems engineering principles, this research study underscored the importance of user-centric design and comprehensive security and privacy guidance to help bridge the gap between cybersecurity best practices and a diverse consumer/end-user skill base.

Item Open Access Improving test case diversity for functional testing in computer vision-based systems (Colorado State University. Libraries, 2025) Reyna Pena, Ricardo, author; Simske, Steve, advisor; Troxell, Wade, committee member; Conrad, Steven, committee member; Cleary, Anne, committee member
Artificial Intelligence (AI) can serve as a powerful tool to enhance software testing within the Software Development Life Cycle (SDLC). By leveraging AI-driven testing, Quality Assurance Engineers (QAEs) can automate both simple and complex tasks, improving overall task accuracy and significantly accelerating the testing process. Traditionally, software testing is performed either manually or through automation. Manual testing, however, can be time-consuming and tedious, as QAEs must thoroughly review user stories with complex requirements and then translate them into comprehensive test cases. The challenge lies in ensuring that no critical steps are missed and that a sufficient number of test cases are created to fully cover the requirements of the user story. Automated testing takes a different approach: it involves creating scripts that can be applied to various types of software testing. However, building these scripts requires an experienced QAE who can translate test cases into programming classes, each containing multiple functions that cover the steps of the test case (a brief illustrative sketch appears below). Coding also plays a crucial role in keeping these automation scripts up to date.
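As a hedged illustration of the automation-script structure the entry above describes, where each test case becomes a class whose methods cover its steps, the following sketch uses Selenium-style calls against an invented web page; the URL and element locators are assumptions for illustration only.

```python
# Hypothetical sketch: a test case written as a class whose methods cover its steps,
# in the style the entry above describes. The URL and element locators are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By

class CheckoutSmokeTest:
    def setup(self):
        self.driver = webdriver.Chrome()
        self.driver.get("https://example.com/shop")  # assumed application under test

    def step_add_item_to_cart(self):
        self.driver.find_element(By.ID, "add-to-cart").click()

    def step_open_cart(self):
        self.driver.find_element(By.ID, "cart-link").click()

    def step_verify_item_present(self):
        items = self.driver.find_elements(By.CSS_SELECTOR, ".cart-item")
        assert len(items) == 1, "Expected exactly one item in the cart"

    def teardown(self):
        self.driver.quit()

if __name__ == "__main__":
    test = CheckoutSmokeTest()
    test.setup()
    try:
        test.step_add_item_to_cart()
        test.step_open_cart()
        test.step_verify_item_present()
    finally:
        test.teardown()
```

Sketches like this also illustrate why automation demands ongoing maintenance: when the application's identifiers or flow change, every affected locator and step method must be updated.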
This can lead to additional time and cost, making it essential to have the right resources in place to ensure a smooth deployment and maintain customer satisfaction. While both manual and automated testing are necessary for testing new software, they often demand more resources than smaller, or even larger, QAE teams can easily allocate. With advancements in AI, we can integrate computer vision (CV), a subfield of AI, to enhance automated testing by enabling navigation through websites in both mobile and desktop views. CV can be used to extract key information from applications, which is then used to automatically generate test cases. To further refine and complete the test case descriptions, a Large Language Model (LLM) is employed, providing more detailed and accurate documentation. In this dissertation, we introduce a novel concept designed to assist stakeholders across the SDLC during the testing phase. Additionally, we evaluate the effectiveness of our approach through key research questions that determine whether using CV and LLMs to generate test cases offers broader test coverage and requires less maintenance than traditional manual and automated testing methods. The system is built on a supervised learning approach, utilizing 2,000 labeled images from websites and mobile applications that together represent 26 classes of UI components. These images were used to train two different CV models, YOLOv8 and Detectron2, with a recommendation to explore AWS Rekognition in future research. To enhance the system's adaptability, robustness, and efficiency, we applied the Predictive Selection with Secondary Engines pattern to further optimize its design. The detection results are leveraged to generate test cases, with ChatGPT, an LLM, assisting in the creation of detailed descriptions for each test case (a brief illustrative sketch appears below). The performance of YOLOv8 is evaluated using metrics such as mAP, precision, and F1-score across various YOLOv8 models trained for 100 epochs. Similarly, results for Detectron2 are evaluated over 20 epochs using two different models: R101 (RetinaNet) and X101-FPN (Faster R-CNN). ChatGPT successfully generated comprehensive test case descriptions, and evaluation techniques such as A/B testing were used to analyze the quality of the generated text. Once the test cases were created, they were compared to both manually and automatically generated test cases to determine which areas of functional testing were covered. The primary goal of this research is to provide QAEs with an additional tool or approach (that is, a process) to enhance their software testing efforts, ultimately improving testing efficiency and coverage.

Item Open Access Evaluation of a model-based approach to accrediting United States government information technology systems following the authorization to operate process (Colorado State University. Libraries, 2025) Sanchez, Edan Christopher, author; Bradley, Thomas H., advisor; Borky, John M., committee member; Sega, Ronald, committee member; Zhao, Jianguo, committee member
This research project explores the Model-Based Systems Engineering (MBSE) methodology as a modernized, alternative strategy to improve the United States Government's (USG) accreditation processes and procedures for accepting new/updated information systems.
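Returning to the computer-vision test-generation pipeline in the Reyna Pena entry above: a minimal sketch of the idea, detecting UI components in a screenshot and asking an LLM to draft a test case description from them, might look like the following. The pretrained checkpoint, prompt, model name, and screenshot path are illustrative assumptions (the ultralytics and openai Python packages are assumed to be installed), and this stands in for, rather than reproduces, the dissertation's 26-class UI detector.

```python
# Hypothetical sketch: detect UI components in a screenshot and ask an LLM to draft a
# test case description from them. The checkpoint, prompt, model name, and screenshot
# path are illustrative assumptions, not the dissertation's configuration.
from ultralytics import YOLO
from openai import OpenAI

detector = YOLO("yolov8n.pt")          # a generic checkpoint stands in for the 26-class UI model
results = detector("screenshot.png")   # run detection on one application screenshot

# Map detected class indices to their names (e.g., button, text field, menu).
detected = [results[0].names[int(c)] for c in results[0].boxes.cls]

client = OpenAI()  # assumes an API key is available in the environment
prompt = (
    "Write a functional test case (title, preconditions, numbered steps, expected result) "
    f"for a page containing these UI components: {', '.join(detected)}."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In the dissertation's full system, the detector would be the custom model trained on the 2,000 labeled UI images, and the generated descriptions would then be compared against manually and automatically authored test cases.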
The primary goal is to significantly accelerate the transition of advanced technology to operational environments, while also capturing the broader benefits that a model-based process can provide. While this dissertation primarily focuses on defense systems within the USG domain, the principles discussed are applicable in a broader context. This research focuses on the application of MBSE to defense Information Technology (IT) systems, or simply Information Systems (IS), that require an Authorization to Operate (ATO). Currently, the security accreditation process for obtaining an ATO for government systems is primarily document-centric. This approach often leads to frequent schedule overruns, significantly increasing costs and negatively impacting stakeholders. The issue is particularly pronounced for large, software- and data-intensive systems, such as those used in Department of Defense (DoD), intelligence, and command and control (C2) operations. The complexity of authorization is significantly magnified when systems incorporate third-party applications requiring independent accreditation, creating cascading dependencies that impact overall system security and deployment timelines; it is likewise magnified for real-time systems that must meet stringent cybersecurity requirements while adhering to strict process deadlines. Mission effectiveness is compromised when operators and end users experience delays in accessing essential tools. The trend toward implementing these types of IT systems is accelerating, highlighting the urgent need to enhance their authorization processes. The proposed approach aims to capture the existing ATO process in a formal Systems Modeling Language (SysML) model. This model will facilitate an analysis to identify bottlenecks, redundant activities, missing interfaces, and other areas of concern. Once the model is developed and analyzed, corrective actions and proposed improvements will be introduced to enhance the process model. The potential benefits will be quantified in terms of speed-to-operations, particularly regarding schedules, as well as improvements in consistency and efficiency throughout the end-to-end process, ultimately leading to a potential reduction in overall system costs. Furthermore, the anticipated gains will be validated through modeling and analysis of the enhanced process as applied to a representative IT system, also represented in SysML. This modeled IT system will reflect the cloud-centric environments currently found in operational contexts, utilizing approved tools and technologies available to development contractors. This research will assess the impact of MBSE on the ATO process, aiming to measure MBSE's effectiveness in mitigating inconsistencies, streamlining system deployment timelines, enhancing quality, reducing costs, and delivering other advantages in this practical context. The conclusions drawn from this study will establish a framework for investing in the modernization of the ATO process toward a systems-engineered, model-based approach, particularly within the realm of USG systems development. The model-based ATO process will also facilitate integration with the federal Digital Engineering (DE) transformation as DE continues to broaden its presence within the federal systems engineering landscape.
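As a toy illustration of the kind of bottleneck analysis the SysML process model above is meant to support, the following sketch represents ATO activities as a directed graph and ranks start-to-finish paths by total duration. Activity names and durations are invented, and the dissertation's actual analysis is performed on a SysML model rather than in Python.

```python
# Hypothetical sketch: a toy activity graph of an ATO process used to surface schedule
# bottlenecks. Activity names and durations are invented; the dissertation's analysis
# is performed on a SysML process model, not in Python.
import networkx as nx

durations = {                      # working days (assumed)
    "categorize_system": 10,
    "select_controls": 15,
    "implement_controls": 60,
    "assess_controls": 45,
    "compile_documentation": 30,
    "authorize": 20,
}

ato = nx.DiGraph()
ato.add_edges_from([
    ("categorize_system", "select_controls"),
    ("select_controls", "implement_controls"),
    ("implement_controls", "assess_controls"),
    ("select_controls", "compile_documentation"),
    ("compile_documentation", "assess_controls"),
    ("assess_controls", "authorize"),
])

# Rank start-to-finish paths by total activity duration; the longest chain is the
# schedule bottleneck that a process improvement would need to shorten.
paths = nx.all_simple_paths(ato, "categorize_system", "authorize")
critical = max(paths, key=lambda p: sum(durations[a] for a in p))
print("Critical path:", " -> ".join(critical),
      "| total days:", sum(durations[a] for a in critical))
```

Shortening or parallelizing activities on the longest-duration chain is what would translate into the speed-to-operations gains the study aims to quantify.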