Theses and Dissertations
Browsing Theses and Dissertations by Issue Date
Now showing 1 - 20 of 106
Item Open Access Using operational risk to increase systems engineering effectiveness (Colorado State University. Libraries, 2016) Gallagher, Brian P., author; Sega, Ronald M., advisor; Chong, Edwin, committee member; Young, Peter, committee member; Bradley, Thomas, committee member

A key activity in the systems engineering process is managing risk. Systems engineers transform end-user needs into requirements that then drive design, development, and deployment activities. Experienced systems engineers are aware of both programmatic and technical risk and how these risks affect program outcomes. A programmatic change to cost, schedule, process, team structure, or a wide variety of other elements may impact the engineering effort and increase the risk of failing to deliver a product or capability when needed, with all required functionality, at the promised cost. Technical challenges may introduce risk as well: if a subcomponent or element of the design is immature or does not perform as expected, additional effort may be required to redesign the element, or a change in requirements or a complete system redesign may even become necessary. Anticipating programmatic and technical risks and implementing plans to mitigate them is part of the systems engineering process. Even with a potent risk management process in place, end-users reject new capabilities when the delivered capabilities fail to perform to their expectations or fail to address the end-user's operational need. The time between the identification of an operational need and the delivery of the resulting capability may be months or even years. When delivered, the new capability either does not fulfill the original need or the need has evolved over time. This disconnect increases operational risk to the end-user's mission or business objectives.
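The risk-management activity described above is commonly operationalized as a risk register that ranks risks by expected exposure (likelihood times impact) across the programmatic, technical, and operational categories. A minimal sketch follows; the risk entries, scales, and scores are hypothetical illustrations, not the dissertation's data:

```python
# Minimal risk-register sketch: scores programmatic, technical, and
# operational risks by expected exposure (likelihood x impact).
# All risk entries and numeric scales are hypothetical.

def exposure(likelihood: float, impact: float) -> float:
    """Expected exposure on a 0-1 likelihood and 1-5 impact scale."""
    if not (0.0 <= likelihood <= 1.0 and 1.0 <= impact <= 5.0):
        raise ValueError("likelihood in [0,1], impact in [1,5]")
    return likelihood * impact

risks = [
    ("schedule slip in integration", "programmatic", 0.6, 3),
    ("immature subcomponent design", "technical", 0.4, 5),
    ("end-user need evolves before delivery", "operational", 0.7, 4),
]

# Rank risks so mitigation effort goes to the largest exposures first.
ranked = sorted(risks, key=lambda r: exposure(r[2], r[3]), reverse=True)
for name, category, p, i in ranked:
    print(f"{category:12s} {exposure(p, i):.2f}  {name}")
```

On these illustrative numbers the operational risk tops the register, which mirrors the abstract's point that operational risk can dominate even when programmatic and technical risks are managed.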
When systems engineers explicitly identify and mitigate operational risk, in addition to programmatic and technical risk, program outcomes are more likely to meet the end-user's real operational need. The purpose of this research is first to define the activities that systems engineers could use to ensure that engineering activities are influenced by operational risk considerations, and second to determine whether a focus on operational risk during the systems engineering lifecycle has a positive impact on program outcomes. A structured approach to addressing operational risk during the systems engineering process, Operational Risk-Driven Engineering Requirements/Engineering Development (ORDERED), is introduced. ORDERED includes an exhaustive operational risk taxonomy designed to assist systems engineers with incorporating the end-user's evolving operational risk considerations into systems engineering activities. To examine the relationship between operational risk considerations during the systems engineering process and program outcomes, a survey instrument was developed and administered. In addition, a system dynamics model was developed to examine the relationship between operational risk and technical debt. Finally, case studies of successful and challenged programs were evaluated against characteristics of successfully addressing operational risk during the program lifecycle. These activities lead to the conclusion that a focus on operational risk during the systems engineering lifecycle has a positive impact on program outcomes.

Item Open Access Cloud Computing cost and energy optimization through Federated Cloud SoS (Colorado State University. Libraries, 2017) Biran, Yahav, author; Collins, George J., advisor; Pasricha, Sudeep, advisor; Young, Peter, committee member; Borky, John M., committee member; Zimmerle, Daniel J., committee member

The two most significant differentiators among contemporary Cloud Computing service providers are increased green energy use and improved datacenter resource utilization. This work addresses these two issues from a system's architectural optimization viewpoint. The approach proposed herein allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, our proposed approach creates an alternative paradigm for a Federated Cloud SoS. The proposed paradigm employs a novel control methodology that is tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities for handling sudden variations in service demand, as well as for maximizing use of time-varying green energy supplies. This work analyzes the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions, and suggests a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. Finally, in our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means.
The report analyzes optimal computing generation methods and optimal energy utilization for computing generation, as well as a procedure for building optimal datacenters using a unique hardware computing system design based on the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.

Item Open Access Technological advances, human performance, and the operation of nuclear facilities (Colorado State University. Libraries, 2017) Corrado, Jonathan K., author; Sega, Ronald M., advisor; Bradley, Thomas H., committee member; Chong, Edwin K. P., committee member; Young, Peter M., committee member

Many unfortunate and unintended adverse industrial incidents occur across the United States each year, and the nuclear industry is no exception. Depending on their severity, these incidents can be problematic for people, the facilities, and surrounding environments. Human error is a contributing factor in many such incidents. This dissertation first explored the hypothesis that technological changes affecting how operators interact with the systems of nuclear facilities exacerbate the cost of incidents caused by human error. I conducted a review of nuclear incidents in the United States from 1955 through 2010 that reached Level 3 (serious incident) or higher on the International Nuclear Event Scale (INES). The cost of each incident at facilities that had recently undergone technological changes affecting plant operators' jobs was compared to the cost of events at facilities that had not undergone such changes. A t-test determined a statistically significant difference between the two groups, confirming the hypothesis. Next, I conducted a follow-on study to determine the impact of the incorporation of new technologies into nuclear facilities.
The data indicated that spending more money on upgrades increased both the facility's capacity and the number of incidents reported, but the incident severity was minor. Finally, I discuss the impact of human error on plant operations and the impact of evolving technology on the 21st-century operator, proposing a methodology to overcome these challenges by applying the systems engineering process.

Item Open Access System level risk analysis of electromagnetic environmental effects and lightning effects in aircraft -- steady state and transient (Colorado State University. Libraries, 2017) Lee, James Y., author; Collins, George J., advisor; Borky, John M., committee member; Cale, James L., committee member; Ackerson, Christopher J., committee member

This dissertation is an investigation of the system level risk of electromagnetic and lightning effects in aircraft. It begins with an analysis to define a system, and a discussion of emergence as a characteristic of a system. Against this backdrop, risk is defined as an undesirable emergent property of a system. A procedure to translate system level non-functional attributes into lower level functional requirements is developed. With this foundation, a model for risk analysis, resolution, and management is developed by employing the standard risk model. The developed risk model is applied to the evaluation of electromagnetic environmental effects and lightning effects in aircraft, and examples are shown to demonstrate the validity of the model. Object Process Methodology and systems thinking principles are used extensively throughout this work. The dissertation concludes with a summary and suggestions for additional work.

Item Open Access Application of systems engineering to complex systems and system of systems (Colorado State University. Libraries, 2017) Sturdivant, Rick L., author; Chong, Edwin K.
P., advisor; Sega, Ronald M., committee member; Jayasumana, Anura P., committee member; Atadero, Rebecca, committee member

This dissertation is an investigation of systems of systems (SoS). It begins with an analysis to define, with some rigor, the similarities and differences between complex systems and SoS. With this foundation, baseline concepts are developed for several different types of systems, which are used as a practical approach to compare and contrast complex systems versus SoS. The method is to progress from simple to more complex systems. Specifically, a pico hydroelectric power generation system, a hybrid renewable electric power generation system, a LEO satellite system, and a Molniya orbit satellite system are investigated. In each of these examples, systems engineering methods are applied to the development of a baseline solution. While these examples are complex, they do not rise to the level of a SoS. In contrast, a multi-spectral drone detection system for protection of airports is investigated and a baseline concept for it is generated. The baseline is shown to meet the minimum requirements to be considered a SoS. The system combines multiple sensor types to distinguish drones as targets. The characteristics of the drone detection system that make it a SoS are discussed. Since emergence is considered by some to be a characteristic of a SoS, it is investigated: a solution to the problem of determining whether system properties are emergent is presented, and necessary and sufficient conditions for emergence are developed. Finally, this work concludes with a summary and suggestions for additional work.

Item Open Access A graph-based, systems approach for detecting violent extremist radicalization trajectories and other latent behaviors (Colorado State University. Libraries, 2017) Hung, Benjamin W. K., author; Jayasumana, Anura P., advisor; Chong, Edwin K.
P., committee member; Ray, Indrajit, committee member; Sega, Ronald M., committee member

The number and lethality of violent extremist plots motivated by the Salafi-jihadist ideology have been growing for nearly the last decade in both the U.S. and Western Europe. While detecting the radicalization of violent extremists is a key component in preventing future terrorist attacks, it remains a significant challenge to law enforcement due to issues of both scale and dynamics. Recent terrorist attack successes highlight the real possibility of missed signals from, or continued radicalization by, individuals whom the authorities had formerly investigated and even interviewed. Additionally, beyond considering just the behavioral dynamics of a person of interest, investigators need to consider the behaviors and activities of social ties vis-à-vis the person of interest. We undertake a fundamentally systems approach in addressing these challenges by investigating the need and feasibility of a radicalization detection system, a risk assessment assistance technology for law enforcement and intelligence agencies. The proposed system first mines public data and government databases for individuals who exhibit risk indicators for extremist violence, and then enables law enforcement to monitor those individuals at a scope and scale that is lawful, accounting for the dynamic indicative behaviors of the individuals and their associates rigorously and automatically. In this thesis, we first identify the operational deficiencies of current law enforcement and intelligence agency efforts, investigate the environmental conditions and stakeholders most salient to the development and operation of the proposed system, and address both programmatic and technical risks with several initial mitigating strategies. We codify this large effort into a radicalization detection system framework.
The main thrust of this effort is the investigation of technological opportunities for identifying individuals matching a radicalization pattern of behaviors in the proposed radicalization detection system. We frame our technical approach as a unique dynamic graph pattern matching problem, and develop a technology called INSiGHT (Investigative Search for Graph Trajectories) to help identify individuals or small groups whose subgraphs conform to a radicalization query pattern, and to follow the match trajectories over time. INSiGHT is aimed at assisting law enforcement and intelligence agencies in monitoring and screening for those individuals whose behaviors indicate a significant risk for violence, allowing for better prioritization of limited investigative resources. We demonstrated the performance of INSiGHT on a variety of datasets, including small synthetic radicalization-specific datasets, a real behavioral dataset of time-stamped radicalization indicators of recent U.S. violent extremists, and a large, real-world BlogCatalog dataset serving as a proxy for the type of intelligence or law enforcement data networks that could be utilized to track the radicalization of violent extremists. We also extended INSiGHT by developing a non-combinatorial neighbor matching technique to enable analysts to maintain visibility of potential collective threats and conspiracies and to account for the role close social ties play in an individual's radicalization. This enhancement was validated on small, synthetic radicalization-specific datasets as well as the large BlogCatalog dataset, with real social network connections and tagging behaviors for over 80K accounts. The results showed that our algorithm returned whole and partial subgraph matches that enabled analysts to gain and maintain visibility of neighbors' activities.
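The core idea of scoring whole and partial matches of time-stamped indicators against an ordered query pattern can be illustrated with a toy sketch. This is not the INSiGHT algorithm itself, only the flavor of temporal trajectory matching it performs; all indicator names and event data below are synthetic:

```python
# Toy sketch of temporal indicator-pattern matching in the spirit of
# INSiGHT: score how completely each individual's time-stamped behaviors
# match an ordered query pattern. All data here are synthetic.

def match_fraction(events, pattern):
    """Fraction of the ordered pattern matched, in order, by events.

    events: list of (timestamp, indicator); pattern: ordered indicators.
    """
    i = 0  # index of the next pattern element to match
    for _, indicator in sorted(events):  # chronological order
        if i < len(pattern) and indicator == pattern[i]:
            i += 1
    return i / len(pattern)

pattern = ["ideology_exposure", "group_contact", "travel_planning"]
people = {
    "A": [(1, "ideology_exposure"), (5, "group_contact"), (9, "travel_planning")],
    "B": [(2, "group_contact"), (3, "ideology_exposure")],
}
scores = {p: match_fraction(ev, pattern) for p, ev in people.items()}
# "A" follows the full trajectory; "B" matches only partially, out of order.
print(scores)
```

A partial score like B's is exactly what lets an analyst keep visibility on individuals whose trajectories are incomplete but evolving.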
Overall, INSiGHT led to consistent, informed, and reliable assessments about those who pose a significant risk for some latent behavior in a variety of settings. Based upon these results, we maintain that INSiGHT is a feasible and useful supporting technology with the potential to optimize law enforcement investigative efforts and ultimately enable the prevention of individuals from carrying out extremist violence. Although the prime motivation of this research is the detection of violent extremist radicalization, we found that INSiGHT is applicable to detecting latent behaviors in other domains such as online student assessment and consumer analytics. This utility was demonstrated through experiments with real data. For online student assessment, we tested INSiGHT on a MOOC dataset of students and time-stamped online course activities to predict which students would persist in the course. For consumer analytics, we tested its performance on a large, real, proprietary consumer activities dataset from a home improvement retailer. Lastly, motivated by the desire to validate INSiGHT as a screening technology when ground truth is known, we developed a synthetic data generator of large-population, time-stamped, individual-level consumer activities data consistent with an a priori project set designation (latent behavior). This contribution also sets the stage for future work in developing an analogous synthetic data generator for radicalization indicators to serve as a testbed for INSiGHT and other data mining algorithms.

Item Open Access Improving construction machine engine system durability in Latin American conditions (Colorado State University. Libraries, 2018) Azevedo, Kurt Milward, author; Olsen, Daniel, advisor; Bradley, Thomas, committee member; Grigg, Neil, committee member; Strong, Kelly, committee member

Between 2016 and 2030, the Latin America region needs to spend $7 trillion (Bridging global infrastructure gaps, 2016).
Thus, for the foreseeable future, the Latin American market will experience high demand for construction equipment such as backhoes, excavators, crawler-dozers, and loaders to construct roads, housing, airports, and seaports. Construction equipment employed in Latin America operates in conditions that are often more severe than in developed countries such as the United States. Consequently, the durability of construction equipment diesel engines is reduced within the context of the systems engineering life cycle. This results in a greater number of warranty claims, increased customer product dissatisfaction, and delays in completing contracted projects. Peer-reviewed literature lacks information regarding the wear and failure of construction equipment diesel engines operating in Latin America. Thus, the purpose of this research is to contribute to the systems and maintainability engineering fields of knowledge by analyzing oil samples taken from diesel engines operating in Latin America. Oil samples are leading indicators and predictors of wear in specific components of diesel engines, as they directly reflect the conditions of actual work environments. The methodological approach considers data points from different sources and countries. The engine oil sample analysis results are evaluated in the context of local diesel fuel quality, machine diagnostic trouble codes, and the work environments for the following countries: Bolivia, Colombia, Costa Rica, Dominican Republic, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Uruguay. The following data sources are used to answer the research questions: (1) oil sample laboratory databases from eleven countries, (2) construction equipment diagnostic trouble codes, (3) construction equipment surveys, (4) John Deere service managers' surveys, (5) two John Deere 200D excavators, (6) engine operating data, and (7) Engine Control Unit sensor data.
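The kind of correlation analysis this abstract describes, relating an environmental risk driver to a wear-metal reading, reduces to a Pearson correlation over paired observations. A minimal sketch follows; the altitude and iron values are synthetic stand-ins, not the study's measurements:

```python
# Sketch of correlating an environmental risk driver (altitude) with a
# wear-metal oil reading (iron, ppm) via the Pearson coefficient.
# The data below are synthetic, not the dissertation's measurements.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

altitude_m = [200, 850, 1600, 2500, 3600, 4100]
iron_ppm = [12, 15, 21, 28, 40, 46]   # synthetic oil-sample readings
r = pearson(altitude_m, iron_ppm)
print(f"r = {r:.3f}")  # strongly positive on this synthetic data
```

A coefficient near 1 on such paired data is what the text means by a "high statistical correlation" between altitude and wear-metal contamination.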
It is determined that cross-system contamination was a key contributor to oil contamination. Contamination related to the environmental conditions in which the equipment was operated is also a key factor, as there is a strong statistical correlation of sodium, silicon, and aluminum contamination in the oil of equipment operating at higher altitudes. It is determined that sulfur, diesel fuel quality, humidity, biodiesel, temperature, and altitude are factors that must be considered in relation to diesel engine reliability and maintenance. By correlating engine oil sample contamination with environmental risk drivers, the research found that (a) altitude and diesel fuel quality have the greatest impact on iron readings, (b) biodiesel impacts copper, and (c) precipitation and poor diesel quality are associated with silicon levels. Wear metals present in the oil samples indicate that scheduled maintenance intervals must not exceed 250 hours for diesel engines operating in many areas of Latin America. The leading and earliest indicator of engine wear is a high level of iron particles in the engine oil, reaching abnormal levels at 218 hours. The research also found that engine idling for extended periods contributes to soot accumulation.

Item Open Access Modeling fuzzy criteria preference to evaluate tradespace of system alternatives (Colorado State University. Libraries, 2018) White, Wesley Gunnar, author; Chandrasekar, V., advisor; Bradley, Thomas, committee member; Chavez, Jose, committee member; Jayasumana, Anura P., committee member

This dissertation explores techniques for evaluating system concepts using the point of diminishing marginal utility to determine a best-value alternative with an optimal combination of risk, performance, reliability, and life cycle cost. The purpose of this research is to address the uncertainty of customer requirements and to assess crisp and fuzzy design parameters to determine a best-value system.
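The scoring idea here, granting utility only up to the customer's ideal requirement so that excess capability earns no additional credit, can be sketched as a simple membership function. The thresholds and capability values below are hypothetical, not the dissertation's criteria:

```python
# Sketch of a membership function that caps utility at the point of
# diminishing marginal utility: scoring ramps up to the ideal
# requirement and goes flat beyond it, so excess capability cannot
# inflate an alternative's score. Thresholds here are hypothetical.

def capped_utility(value, minimum, ideal):
    """Ramp from 0 at the minimum requirement to 1 at the ideal,
    then flat: capability beyond the ideal earns no further score."""
    if value <= minimum:
        return 0.0
    if value >= ideal:
        return 1.0          # capped at the point of diminishing utility
    return (value - minimum) / (ideal - minimum)

# Three alternatives offering 300, 500, and 900 units of some capability
# against a minimum requirement of 200 and an ideal of 500: the 900-unit
# alternative scores no higher than the one exactly meeting the ideal.
for capability in (300, 500, 900):
    print(capability, capped_utility(capability, 200, 500))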
At the time of this research, most commonly used decision analysis (DA) techniques use minimum and maximum values under a specific criterion to evaluate each alternative. These DA methods do not restrict scoring beyond the point of diminishing marginal utility, resulting in superfluous capabilities and overvalued system alternatives. Using these models, an alternative being evaluated could receive significantly higher scores when reported capabilities are greater than ideal customer requirements. This problem is pronounced whenever weights are applied to criteria for which excessive capabilities are recorded. The techniques explored in this dissertation utilize fuzzy membership functions to restrict scoring for alternatives that provide excess capabilities beyond ideal customer requirements. This research investigates and presents DA techniques for evaluating system alternatives that determine an ideal compromise between risk, performance criteria, reliability, and life cycle costs.

Item Open Access Maintaining leachate flow through a leach bed reactor during anaerobic digestion of high-solids cattle manure (Colorado State University. Libraries, 2018) Lewis, Matthew A., author; Sharvelle, Sybil, advisor; Grigg, Neil, committee member; Quinn, Jason, committee member

To address the accumulation of high-solids cattle manure (HSCM) found at many of the state's Animal Feeding Operations (AFOs), researchers at CSU have developed a Multi-Stage Anaerobic Digester (MSAD). The MSAD system consists of a leach bed reactor (LBR), a compositing tank, and a fixed-film methanogenic reactor. The LBR is a critical part of the MSAD system, since hydrolysis can be a rate-limiting step in the anaerobic digestion of HSCM (Hinds 2015; Veeken and Hamelers 1999). To ensure that hydrolysis occurs properly within the reactor, leachate injection and reactor operation must proceed in a manner that facilitates uniform distribution of leachate through the manure waste bed.
Since the leachate must be recirculated through the LBR for the entirety of the batch digestion time, any phenomena that disrupt the duration or uniformity of leachate distribution must be addressed. The overarching goal of this thesis project was to improve the hydraulic performance of the LBR stage of the MSAD. This research included a multi-criteria decision analysis (MCDA) to assess unique design aspects of the MSAD relative to other technologies, construction and operation of a prototype LBR, and the development of an experimentation strategy to assess mechanisms of hydraulic failure in the LBR. The MSAD system was compared to four other high-solids anaerobic digester technologies using the MCDA. The purpose of this comparison was to identify unique design features of the MSAD technology compared to other high-solids anaerobic digestion technologies, to inform the focus of future design and research activities. The technologies were rated and evaluated on the following criteria: operational requirements, impact of hydraulic failure, capital requirements, operational control, feedstock technology fit, and process efficiency. The scores ranged from 2.9 to 3.7 out of 5 possible points. Under equal criteria weighting, the MSAD system received the highest rating with a score of 3.7, owing to its strong hydraulic performance, operational control, and process efficiency. Knowledge gained through laboratory and prototype-scale LBR experimentation was used to establish possible improvements to LBR design. The primary improvement to the LBR was the modification from a downflow to an upflow configuration. A prototype LBR was operated in the upflow configuration to facilitate longer durations of undisrupted leachate permeation.
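The equal-weight MCDA rating described above reduces to a weighted average of per-criterion scores. A minimal sketch follows; the criterion names are from the text, but the individual 1-5 ratings are illustrative, not the thesis's actual scoring:

```python
# Sketch of an equal-weight MCDA rating: each technology receives a
# 1-5 score per criterion, combined as a weighted average. Criterion
# names follow the text; the individual scores are illustrative only.

criteria = ["operational requirements", "impact of hydraulic failure",
            "capital requirements", "operational control",
            "feedstock technology fit", "process efficiency"]
weights = {c: 1 / len(criteria) for c in criteria}  # equal weighting

def weighted_score(ratings):
    """Weighted sum of a technology's per-criterion ratings."""
    return sum(weights[c] * ratings[c] for c in criteria)

# Hypothetical ratings for one technology on the six criteria.
msad = dict(zip(criteria, [4, 4, 3, 4, 3, 4]))
print(round(weighted_score(msad), 1))
```

Unequal weights (e.g., emphasizing impact of hydraulic failure) would simply replace the uniform `weights` dictionary, which is how criteria-weighting sensitivity is typically explored in an MCDA.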
In addition, it was determined that leachate injection spacing should be studied further, as results from operation of the prototype LBR suggested that higher volatile solids reduction occurred closer to the leachate influent manifold. Column experiments and prototype operation showed some successful operation of LBRs for treating HSCM. However, hydraulic failures due to clogging and preferential pathway formation were observed. Due to the continued risk of hydraulic failure, further research was needed to understand the mechanisms of hydraulic failure and to determine approaches to overcome these issues. At commercial scale, hydraulic failure of LBRs would result in decreased energy and agricultural product output and increased operating costs. Since commercial processes rely on reproducible results, a high degree of LBR reliability is required to achieve technical and economic feasibility. Therefore, control over the hydraulic performance of LBRs is critical for commercialization of the MSAD system. To this end, an experimentation strategy was developed with the goal of elucidating the mechanisms behind hydraulic failures occurring in the LBR. To evaluate these mechanisms, the experimentation strategy recommends the use of electrical resistivity tomography (ERT) to render visualizations of leachate distribution throughout the waste bed. Further characterization of the pore space network geometry at the microscale using either Magnetic Resonance Imaging (MRI) or X-ray Computed Tomography (X-ray CT) is also recommended.

Item Open Access Applying model-based systems engineering to architecture optimization and selection during system acquisition (Colorado State University.
Libraries, 2018) LaSorda, Michael, author; Sega, Ronald M., advisor; Borky, Mike, advisor; Bradley, Tom, committee member; Quinn, Jason, committee member

The architecture selection process early in a major system acquisition is a critical step in determining the overall affordability and technical performance of a program. There are recognized deficiencies that frequently occur in this step, such as poor transparency into the final selection decision and excessive focus on lowest cost, which is not necessarily the best value for all of the stakeholders. This research investigates improvements to the architecture selection process by integrating Model-Based Systems Engineering (MBSE) techniques, rigorous quantitative evaluation metrics with a corresponding understanding of uncertainties, and stakeholder feedback in order to generate an architecture that is more optimized and better trusted to provide value for the stakeholders. Three case studies were analyzed to demonstrate the proposed process. The first focused on a satellite communications System of Systems (SoS) acquisition to demonstrate the overall feasibility and applicability of the process. The second investigated an electro-optical remote sensing satellite system to compare the proposed process to a current architecture selection process typified by the United States Department of Defense (U.S. DoD) Analysis of Alternatives (AoA). The third analyzed the evaluation of a service-oriented architecture (SOA) providing satellite command and control with cybersecurity protections, in order to demonstrate rigorous accounting of uncertainty through architecture evaluation and selection. These case studies serve to define and demonstrate a new, more transparent and trusted architecture selection process that consistently provides better value for the stakeholders of a major system acquisition. While the examples in this research focused on U.S.
DoD and other major acquisitions, the methodology developed is broadly applicable to other domains where there is a need for optimization of enterprise architectures as the basis for effective system acquisition. The results from the three case studies showed that the new process outperformed the current methodology for conducting architecture evaluations on nearly all criteria considered; in particular, it selects architectures of better value, provides greater visibility into the actual decision making, and improves trust in the decision through a robust understanding of uncertainty. The primary contribution of this research, then, is improved information support to architecture selection in the early phases of a system acquisition program. The proposed methodology presents a decision authority with an integrated assessment of each alternative, traceable to the concerns of the system's stakeholders, and thus enables a more informed and objective selection of the preferred alternative. It is recommended that the methodology proposed in this work be considered for future architecture evaluations.

Item Open Access Voltage reduction and automation on the residential distribution grid (Colorado State University. Libraries, 2018) Meller, Ryan, author; Collins, George, advisor; Borky, John, committee member; Young, Peter, committee member; Marchese, Anthony, committee member

This paper represents the culmination of my research on the effects of voltage reduction and automation on the residential distribution grid. Although voltage reduction has been in use for many years, the strategies identified and tested through my research increase savings for utilities by reducing demand during peak periods. In addition, by automating switching to transfer load on the system, utilities will benefit not only during outage events, but also in alleviating load on substations and equipment nearing capacity during load control events.
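The peak-demand savings from voltage reduction are commonly estimated with a conservation voltage reduction (CVR) factor: percent demand reduction is roughly the CVR factor times percent voltage reduction. A back-of-envelope sketch follows; the CVR factor and feeder load are illustrative assumptions, not this research's measured values:

```python
# Back-of-envelope conservation voltage reduction (CVR) estimate:
# percent demand reduction ~= CVR factor x percent voltage reduction.
# The CVR factor (0.8) and feeder load below are illustrative only.

def cvr_demand_reduction(load_kw, voltage_reduction_pct, cvr_factor=0.8):
    """Estimated demand reduction (kW) for a given voltage reduction."""
    return load_kw * cvr_factor * voltage_reduction_pct / 100.0

feeder_load_kw = 5000
saved = cvr_demand_reduction(feeder_load_kw, voltage_reduction_pct=2.5)
print(f"~{saved:.0f} kW shaved at system peak")  # 5000 * 0.8 * 0.025 = 100
```

Even a modest 2.5% voltage reduction yields a visible peak-demand cut on this illustrative feeder, which is why utilities pair voltage reduction with automated switching during load control events.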
The energy grid has benefited from a number of efficiencies in the past several years; however, system peaks continue to be problematic for electric utilities from both a cost and an infrastructure perspective. The following presentation sets forth automated voltage reduction techniques, as well as automated switching approaches on distribution line sections, in an effort to address these concerns.

Item Open Access Autonomous UAV control and testing methods utilizing partially observable Markov decision processes (Colorado State University. Libraries, 2018) Eaton, Christopher M., author; Chong, Edwin K. P., advisor; Maciejewski, Anthony A., advisor; Bradley, Thomas, committee member; Young, Peter, committee member

The explosion of Unmanned Aerial Vehicles (UAVs) and the rapid development of algorithms to support autonomous flight operations of UAVs have resulted in a diverse and complex set of requirements and capabilities. This dissertation provides an approach to managing these autonomous UAVs effectively, commanding these vehicles efficiently through their missions, and verifying and validating that the system meets requirements. A high-level system architecture is proposed for implementation on any UAV. A Partially Observable Markov Decision Process (POMDP) algorithm for tracking moving targets is developed for fixed field-of-view sensors, providing an approach for more fuel-efficient operations. Finally, an approach for testing autonomous algorithms and systems is proposed to enable efficient and effective test and evaluation in support of verification and validation of autonomous system requirements.

Item Open Access A balance of design methodology for enterprise quality attribute consideration in System-of-Systems architecting (Colorado State University.
Libraries, 2019) Nelson, Travis J., author; Borky, John M., advisor; Sega, Ronald M., advisor; Bradley, Thomas K., committee member; Roberts, Nicholas H., committee memberAn objective of System-of-Systems (SoS) engineering work in the Defense community is to ensure optimal delivery of operational capabilities to warfighters in the face of finite resources and constantly changing conditions. Assurance of enterprise-level capabilities for operational users in the Defense community presents a challenge for acquisitions in balancing multiple SoS architectures versus the more traditional system-based optimization. The problem is exacerbated by the complexity of SoS being realized by multiple, heterogeneous, independently-managed systems that interact to provide these capabilities. Furthermore, the comparison of candidate SoS architectures for selection of the design that satisfies the most enterprise-level objectives and how such decisions affect the future solution space lead to additional challenges in applying existing frameworks. As a result of the enormous challenge associated with enterprise capability development, this research proposes an enterprise architecting methodology leveraging SoS architecture data in the context of multiple enterprise-level objectives to enable the definition of candidate architectures for comparison and decision-making. In this context, architecture-based quality attributes of the enterprise (e.g., resilience, agility, changeability) must be considered. This research builds and extends previous SoS engineering work in the Department of Defense (DoD) to develop a process framework that can improve the analysis of architectural attributes within an enterprise. Certain system attributes of interest are quantified using selected Quality Attributes (QAts). The proposed process framework enables the identification of the quality attributes of interest as the desired characteristics to be balanced against performance measures. 
QAts are used to derive operational activities as well as design techniques for employment against an as-is SoS architecture. These activities and techniques are then mapped to metrics used to compare alternative architectures. These alternatives enable an SoS-based balance of design for performance and quality attribute optimization while employing a capability model to provide a comparison of available alternatives against overarching preferences. Approaches are then examined to analyze the performance of the alternatives in meeting the enterprise capability objectives. These results are synthesized to enable an analysis of alternatives (AoA) to produce a "should-be" architecture vector based on a selected "to-be" architecture. A comparison of the vector trade space is discussed as future work in relation to the original enterprise-level objectives for decision-making. The framework is illustrated using three case studies: a DoD Satellite Communications (SATCOM) case study; a Position, Navigation, and Timing (PNT) case study; and a satellite operations "as-a-service" case study. For the SATCOM case study specifically, the question is considered of whether a certain QAt—resilience—can best be achieved through design alternatives of satellite disaggregation or diversification. The analysis shows that, based on the metric mapping and design alternatives examined, diversification provides the greatest SATCOM capability improvement compared to the base architecture, while also enhancing resilience. These three separate case studies show that the framework can be extended to address multiple similar issues with system characteristics and SoS architecture questions for a wide range of enterprises.
Item Open Access A systems engineering approach to community microgrid electrification and sustainable development in Papua New Guinea(Colorado State University.
Libraries, 2019) Anderson, Alexander A., author; Suryanarayanan, Siddharth, advisor; Cale, James, committee member; Zimmerle, Dan, committee member; Chen, Suren, committee memberElectrification of remote communities worldwide represents a key necessity for sustainable development and advancement of the 17 United Nations Sustainable Development Goals (SDGs). With over 1 billion people still lacking access to electricity, finding new methods to provide safe, clean, reliable, and affordable energy to off-grid communities represents an increasingly dynamic area of research. However, traditional approaches to power system design, focused exclusively on the conventional metrics of cost and reliability, do not provide a sufficiently broad view of the profound impact of electrification. Installation of a single microgrid is a life-changing experience for thousands of people, including both residents who receive direct electricity service and numerous others who benefit from better education, new economic opportunities, incidental job creation, and other critical infrastructure systems enabled by electricity. Moreover, an electrification microgrid must directly satisfy community needs, be sensitive to local environmental constraints, mitigate possible risks, and plan for at least a decade of sustainable operations and maintenance. These considerations extend beyond the technical and optimization problems typically addressed in microgrid design. An enterprise system-of-systems framework for microgrid planning considering technical, economic, environmental, and social criteria is developed in response to the need for a comprehensive methodology for planning of community electrification projects. This framework spans the entire systems engineering discipline and incorporates elements from project management, risk management, enterprise architecture, numerical optimization, multi-criteria decision-making, and sustainable development theory.
To support the creation of the systems engineering framework, a comprehensive survey of multi-objective optimization formulations for planning and dispatch of islanded microgrids was conducted to form a baseline for further discussion. This survey identifies that all optimization studies of islanded microgrids are based on formulations selecting a combination of 16 possible objective functions, 14 constraints, and 13 control variables. From the nearly 250 publications surveyed, a sufficient group of decision-making elicitees is formed to create a comprehensive optimization framework based on technical, economic, environmental, and social attributes of islanded microgrids. This baseline enables the formulation of a flexible, computationally lightweight methodology for microgrid planning in consideration of multiple conflicting objectives using the simple multi-attribute rating technique exploiting ranks (SMARTER). Simultaneously, the identified technical, economic, environmental, and social decision criteria form a network of functional, operational, and performance requirements in an enterprise system-of-systems structure that considers all stakeholders and actors in the development of community electrification microgrids. This framework treats community capacity building and sustainable development theory as a hierarchical structure, where each layer of the hierarchy is mapped both to a set of organizational, financial, and physical subsystems and to a corresponding subset of the 17 SDGs. The structure presents the opportunity not only to integrate classical project management and risk management tools, but also to create a new lifecycle for planning, funding, executing, and monitoring multi-phase community infrastructure projects. Throughout the research, a case study of the Madan Community in Jiwaka Province, Papua New Guinea is used to demonstrate the systems engineering concepts and tools developed by the research.
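SMARTER typically derives surrogate criterion weights from rank order alone using rank-order-centroid (ROC) weights. The sketch below is a minimal illustration of that weighting step, not the dissertation's actual implementation; the single-attribute utility values passed in are hypothetical.

```python
def roc_weights(k: int) -> list[float]:
    """Rank-order-centroid weights for k criteria ranked by importance:
    w_i = (1/k) * sum_{j=i}^{k} 1/j, so the top-ranked criterion weighs most."""
    return [sum(1.0 / j for j in range(i, k + 1)) / k for i in range(1, k + 1)]


def smarter_score(ranked_utilities: list[float]) -> float:
    """Weighted-sum score of one alternative; utilities are ordered from the
    most important criterion (e.g., technical) to the least (e.g., social)."""
    return sum(w * u for w, u in zip(roc_weights(len(ranked_utilities)), ranked_utilities))
```

For four criteria the ROC weights are approximately 0.521, 0.271, 0.146, and 0.063 (summing to 1), so candidate microgrid designs can be ranked without eliciting exact weights from decision makers.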
The community is the center of a multi-phase community capacity-building project addressing critical needs of the deep rural community, including electricity, education, water, sanitation, healthcare, and economic opportunities. The researcher has been involved as a pro-bono consultant for the project since 2013 and helped raise over $1M USD in infrastructure materials, equipment, and consulting. The structure of the community-based organization and the numerical optimization of a series of islanded microgrids are used to illustrate both the system-of-systems hierarchy and microgrid planning techniques based on both single-objective optimization using linear programming and the SMARTER methodology for consideration of multiple qualitative and quantitative decision criteria.
Item Open Access Modelling and analysis of systems on offshore oil and gas platforms(Colorado State University. Libraries, 2019) Grassian, David, author; Olsen, Daniel, advisor; Bradley, Thomas, committee member; Carlson, Kenneth, committee member; Marchese, Anthony, committee memberThis research examines oil and gas systems from the seemingly underutilized perspective of energy; this is counterintuitive since the energy content of hydrocarbon products is their most distinguishing characteristic and the very reason they are valued by society. It is clear that the amount of energy required to extract crude oil is increasing over time, both at the long-term global level and over the much shorter time span of individual fields. The global trend is a well-documented phenomenon and is related to the depletion of the most energetically favorable reservoirs and a coincidental growing global demand for energy. In existing fields, it is often necessary to implement increasingly higher energy intensity methods to extract the remaining crude oil resources.
These trends are the impetus for the industry to gain a better understanding of the relationship between the application of energy and the production of crude oil across a wide spectrum of production methods.
Item Open Access Innovative hydrogen station operation strategies to increase availability and decrease cost(Colorado State University. Libraries, 2019) Kurtz, Jennifer, author; Bradley, Thomas, advisor; Willson, Bryan, committee member; Suryanarayanan, Siddharth, committee member; Ozbek, Mehmet, committee memberMajor industry, government, and academic teams have recently published visions and objectives for widespread use of hydrogen in order to enable international energy sector goals such as sustainability, affordability, reliability, and security. Many of these visions emphasize the important and highly scalable use of hydrogen in fuel cell electric cars, trucks, and buses, supported by public hydrogen stations. The hydrogen station is a complicated system composed of various storage, compression, and dispensing sub-systems, with the hydrogen either delivered via truck or produced on-site. As the number of fuel cell electric vehicles (FCEVs) on U.S. roads has increased quickly, the number of hydrogen stations, the amount of hydrogen dispensed, and the importance of station reliability and availability to FCEV drivers have also increased. For example, in California, the number of public, retail hydrogen stations increased from zero to more than 30 in less than 2 years, and the annual hydrogen dispensed increased from 27,400 kg in 2015 to nearly 105,000 kg in 2016 and to more than 913,000 kg in 2018, an increase of nearly 9 times in 2 years for retail stations.
Although government, industry, and academia have studied many aspects of hydrogen infrastructure, much of the published literature does not address hydrogen station operational and system innovations, even though FCEVs and hydrogen stations have documented problems with reliability, costs, and maintenance in this early commercialization phase. In general, hydrogen station research and development has lagged behind the intensive development effort that has been allocated to hydrogen FCEVs. Based on this understanding of the field, this research aims to identify whether integrating reliability engineering analysis methods with extensive hydrogen station operation and maintenance datasets can address the key challenge of station reliability and availability. The research includes the investigation and modeling of real-world hydrogen station operation and maintenance. This research first documents and analyzes an extensive dataset of hydrogen station operations to discover the state of the art of current hydrogen station capabilities and to identify performance gaps against key criteria such as cost, reliability, and safety. Secondly, this research presents a method for predicting future hydrogen demand in order to understand the impact of the proposed station operation strategies on data-driven decision-making for low-impact maintenance scheduling and optimized control strategies. Finally, based on an analysis indicating the need for improved hydrogen station reliability, the research applies reliability engineering principles to the hydrogen station application through the development and evaluation of a prognostic health management system.
Item Open Access Scalable and data efficient deep reinforcement learning methods for healthcare applications(Colorado State University.
Libraries, 2019) Saripalli, Venkata Ratnam, author; Anderson, Charles W., advisor; Hess, Ann Marie, committee member; Young, Peter, committee member; Simske, Steve John, committee memberArtificial intelligence driven medical devices have created the potential for significant breakthroughs in healthcare technology. Healthcare applications using reinforcement learning are still very sparse, as the medical domain is very complex and decision making requires domain expertise. The high volumes of data generated by medical devices – a key input for delivering on the promise of AI – suffer from both noise and a lack of ground truth. The cost of data increases as it is cleaned and annotated. Unlike other data sets, medical data annotation, which is critical for accurate ground truth, requires medical domain expertise to ensure a high-quality patient outcome. While accurate recommendation of decisions is vital in this context, making them in near real time on devices with computational resource constraints requires that we build efficient, compact representations of models such as deep neural networks. While deeper and wider neural networks are designed for complex healthcare applications, model compression can be an effective way to deploy networks on medical devices that often have hardware and speed constraints. Most state-of-the-art model compression techniques require a resource-centric manual process that explores a large model architecture space to find a trade-off solution between model size and accuracy. Recently, reinforcement learning (RL) approaches have been proposed to automate such a hand-crafted process. However, most RL model compression algorithms are model-free: they make no assumptions about the model but require longer training times. In contrast, model-based (MB) approaches are data-driven and converge faster, but are sensitive to bias in the model.
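The model-free/model-based distinction can be made concrete with a single temporal-difference update. The following is a generic tabular Q-learning sketch for illustration only; the states, actions, and step sizes are hypothetical, and this is not the compression or annotation algorithm developed in the dissertation.

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """Model-free update: no transition model is assumed; only the single
    sampled experience (s, a, r, s_next) adjusts the action-value table."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    td_target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
```

A model-based variant would instead fit transition and reward models from the same samples and plan against them, typically converging with fewer environment interactions at the cost of sensitivity to model bias — the trade-off described above.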
In this work, we report on the use of reinforcement learning to mimic the decision-making process of annotators for medical events, automating annotation and labelling. The reinforcement agent learns to annotate alarm data based on annotations done by an expert. Our method shows promising results on medical alarm data sets. We trained deep Q-network and advantage actor-critic agents using data from monitoring devices annotated by an expert. Initial results from these RL agents learning the expert-annotated behavior are encouraging. The advantage actor-critic agent learns the sparse events in a given state better, thereby choosing correct actions more often than the deep Q-network agent. To the best of our knowledge, this is the first reinforcement learning application for the automation of medical event annotation, which has far-reaching practical use. In addition, a data-driven model-based algorithm is developed that integrates seamlessly with model-free RL approaches for the automation of deep neural network model compression. We evaluate our algorithm on a variety of imaging data, from dermoscopy to X-ray, on different popular and public model architectures. Compared to model-free RL approaches, our approach achieves faster convergence, exhibits better generalization across different data sets, and preserves comparable model performance. The application of these new RL methods to the healthcare domain, for both false alarm detection and model compression, is generic and can be applied to any domain where sequential decision making is partially random and partially controlled by the decision maker.
Item Open Access Long duration measurements of pneumatic controller emissions on onshore natural gas gathering stations(Colorado State University.
Libraries, 2019) Luck, Benjamin Kendell, author; Quinn, Jason, advisor; Zimmerle, Daniel, advisor; Marchese, Anthony, committee member; von Fischer, Joseph, committee memberOver the last 15 years, advances in hydraulic fracturing have led to a boom of natural gas production in the United States and abroad. The combustion of natural gas produces less carbon dioxide (CO2) than the combustion of other fossil fuels per unit of energy released, making it an attractive option for reducing emissions from the power generation and transportation industries. Uncombusted methane (CH4) has a global warming potential (GWP) 86 times that of CO2 on a 20-year time scale and a GWP 32 times that of CO2 on a 100-year time scale. The increase in supply chain throughput has led to concerns regarding the greenhouse gas contributions of CH4 from accidental or operational leaks from natural gas infrastructure. Automated, pneumatically actuated valves are used to control process variables on stations in all sectors of the natural gas industry. Pneumatic valve controllers (PCs) vent natural gas to the atmosphere during their normal operation and are a significant source of fugitive emissions from the natural gas supply chain. This paper outlines the work that was done to improve the characterization of emissions from PCs using long duration measurements. This work was performed as part of the Department of Energy funded Gathering Emission Factor (GEF) study. A thermal mass flow meter based emission measurement system was developed to perform direct measurements of pneumatic controller emissions over multiday periods. This measurement system was based on methods used in previous studies, with design modifications made to meet site safety regulations, power supply constraints, and measurement duration targets. Emissions were measured from 72 PCs at 16 gathering compressor stations between June 2017 and May 2018.
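The GWP figures quoted above imply a simple CO2-equivalence calculation for any measured methane leak; a minimal sketch, where the example leak mass in the note below is hypothetical:

```python
GWP_CH4_20YR = 86   # kg CO2e per kg CH4 on a 20-year horizon (value quoted above)
GWP_CH4_100YR = 32  # kg CO2e per kg CH4 on a 100-year horizon (value quoted above)


def co2_equivalent(kg_ch4: float, horizon_years: int = 20) -> float:
    """CO2-equivalent mass (kg) of a methane emission at a given time horizon."""
    gwp = GWP_CH4_20YR if horizon_years == 20 else GWP_CH4_100YR
    return kg_ch4 * gwp
```

For instance, a hypothetical 100 kg CH4 leak is equivalent to 8,600 kg of CO2 on a 20-year horizon but 3,200 kg on a 100-year horizon, which is why the choice of time scale matters when weighing natural gas against other fuels.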
The average emission rate of the 72 PCs was 10.86 scfh [+4.31/-3.60], which is 91.2% of the EPA's current emission factor for PCs on gathering compressor stations. The mean measurement duration of these 72 samples was 76.8 hours. Due to potential biases associated with flow meter errors, updates to EPA emission factors based on these data are not proposed. However, because all previous studies to quantify PC emissions used short sampling times (typically ≤15 minutes), the long duration measurements provided insight into previously unobserved PC emissions behavior. A panel of industry experts assessed the emissions recordings and found that 30 PCs (42% of measured devices) had emission patterns or rates that were inconsistent with their design. Of the emissions measured during this study, 73% were attributed to these 30 PCs, which were malfunctioning from an emissions perspective. It was also found that PC emission rates are more variable over time than previously thought. Due to this high temporal variability, the short duration observations currently used by leak detection programs to identify malfunctioning equipment have a low probability of providing accurate characterizations of PC emissions. Many natural gas companies are investigating ways to improve the efficiency of their operations and reduce rates of natural gas leakage in their systems. The data presented in this paper improve the characterization of emissions behavior from a significant emission source in the production, processing, and transmission sectors of the natural gas supply chain and have implications for organizations with an interest in reducing emissions from PCs.
Item Open Access The systems engineering casualty analysis simulation (SE-CAS)(Colorado State University.
Libraries, 2019) Creary, Andron Kirk, author; Sega, Ron, advisor; Reisfeld, Brad, committee member; Young, Peter, committee member; Bradley, Thomas, committee memberIn this dissertation, we illustrate the use of the systems engineering casualty analysis simulation (SE-CAS). SE-CAS, inspired by the Army's need to detect, identify, and operate in areas contaminated by Chemical Warfare Agent (CWA), is a framework for creating chemical warfare simulations. As opposed to existing simulations, which emulate simple cause-and-effect relationships, SE-CAS is developed using a systems thinking approach to dynamically represent interconnected elements during weaponized release of CWA. Through the use of Monte Carlo simulation methods, integrated dynamic analytic models, and the NASA WorldWind® global display, SE-CAS provides the capability to visualize areas of chemical warfare agent dispersion, symptomology and exposure effects, and prescription of optimal survival factors within a common constructive environment. Supported by Colorado State University's Walter Scott Jr. School of Engineering and industry affiliates, SE-CAS is part of a larger research and development effort to expand industry modeling, simulation, and analysis capabilities within the Chemical, Biological, Radiological, Nuclear and Explosives (CBRN-E) discipline. SE-CAS is an open, parameterized simulation allowing the user to set initial conditions, simulation mode, parameters, and randomized inputs through a scenario editor. Inputs are passed through the simulation components and service layers, including processor logic, simulation management, and visualization and observer services. Data output is handled within the simulation display, as well as in text format for easy back-end analysis.
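The Monte Carlo approach mentioned above can be sketched generically: repeatedly draw randomized inputs, run the model, and tally outcomes. The snippet below is a toy illustration only; the lognormal dose distribution and threshold are hypothetical stand-ins, not SE-CAS's actual exposure models.

```python
import random


def exceedance_probability(n_trials=100_000, threshold=1.0, mu=0.0, sigma=1.0, seed=42):
    """Monte Carlo estimate of P(dose > threshold) when each trial draws a
    lognormally distributed dose; more trials tighten the estimate."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials) if rng.lognormvariate(mu, sigma) > threshold)
    return hits / n_trials
```

With mu = 0, the true exceedance probability at a threshold of 1.0 is exactly 0.5, so the estimate should land close to that; the same draw-run-tally pattern underlies constructive simulations of this kind.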
The contributions of this dissertation are as follows: it advanced the state of systems engineering practice in the modeling, simulation, and analysis of chemical warfare agents during simulated military operations; created a robust, modular, and customizable systems engineering framework for building chemical warfare simulations; developed a practical software solution to fill gaps in CBRN-E M&S tool offerings; integrated newly created dynamic models compatible with CBRN-E platforms; and formulated a roadmap for the application of Live, Virtual and Constructive training and operational planning to joint warfare integrated systems assessments.
Item Open Access Disaggregation of net-metered advanced metering infrastructure data to estimate photovoltaic generation(Colorado State University. Libraries, 2019) Stainsby, Wendell Jay, author; Young, Peter, advisor; Zimmerle, Daniel, committee member; Aloise-Young, Patricia, committee memberAdvanced metering infrastructure (AMI) is a system of smart meters and data management systems that enables communication between a utility and a customer's premise and can provide real-time information about a solar array's production. Because residential solar systems are typically configured behind the meter, utilities often have very little information about their energy generation. In these instances, net-metered AMI data do not provide clear insight into PV system performance. This work presents a methodology for modeling individual-array and system-wide PV generation using only weather data, premise AMI data, and the approximate date of PV installation. Nearly 850 homes with installed solar in Fort Collins, Colorado, USA were modeled for up to 36 months. By matching comparable periods of time to factor out sources of variability in a building's electrical load, algorithms are used to estimate the building's consumption, allowing the previously invisible solar generation to be calculated.
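The disaggregation step just described — estimate what the building would have consumed, then subtract the observed net-metered load — reduces to a per-interval difference. A minimal sketch, where `matched_baseline_kwh` is a hypothetical stand-in for the consumption estimate produced by the comparable-period matching:

```python
def estimate_generation(net_load_kwh, matched_baseline_kwh):
    """Behind-the-meter PV inference per metering interval:
    generation = estimated consumption - observed net load, floored at zero."""
    return [max(base - net, 0.0)
            for net, base in zip(net_load_kwh, matched_baseline_kwh)]
```

For example, a net reading of -0.5 kWh (export to the grid) against an estimated consumption of 2.0 kWh implies 2.5 kWh of generation in that interval.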
These modeled outputs are then compared to previously developed white-box physical models. Using this new AMI method, individual premises can be modeled to within ±20% agreement with the physical models. When modeling portfolio-wide aggregation, the AMI method operates most effectively in summer months, when solar generation is highest. Over 75% of all days within the three modeled years are estimated to within ±20% of established methods. Advantages of the AMI model with regard to snow coverage, shading, and other difficult-to-model factors are discussed, and next-day PV prediction using forecasted weather data is also explored. This work provides a foundation for disaggregating solar generation from AMI data without knowing the specific physical parameters of the array or using known generation for computational training.