Browsing by Author "Simske, Steve, committee member"
Now showing 1 - 20 of 25

Item Open Access
An analysis of Internet of Things (IoT) ecosystem from the perspective of device functionality, application security and application accessibility (Colorado State University. Libraries, 2022) Paudel, Upakar, author; Ray, Indrakshi, advisor; Malaiya, Yashwant, committee member; Simske, Steve, committee member
Internet of Things (IoT) devices are being widely used in smart homes and organizations. IoT devices can have security vulnerabilities on different fronts: the device front, with embedded functionalities, and the application front. This work analyzes IoT security health from the perspective of device functionality and from the perspective of application security and accessibility, to build a holistic picture of the entire IoT ecosystem's security health. An IoT device has some intended purposes but may also have hidden functionalities. Typically, the device is installed in a home or an organization, and the network traffic associated with the device is captured and analyzed to infer high-level functionality to the extent possible. However, such analysis is dynamic in nature and requires installation of the device and access to network data, which is often hard to obtain for privacy and confidentiality reasons. In this work, we propose an alternative static approach that infers the functionality of a device from vendor materials using Natural Language Processing (NLP) techniques. Information about IoT device functionality can be used in various applications, one of which is ensuring security in a smart home; device functionalities are especially useful in security applications such as access control policies. Based on the functionality of a device, we can provide assurance to consumers that the device will comply with the home or organizational policy even before it has been purchased. Most IoT devices interface with the user through mobile companion apps. Such apps are used to configure, update, and control the device(s), constituting a critical component of the IoT ecosystem, but they have historically been under-studied. In this thesis, we also perform a security and accessibility analysis of 265 IoT companion apps to understand the vulnerabilities present in the apps and identify some mitigating strategies.
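Where this record describes statically inferring device functionality from vendor materials with NLP, a minimal sketch of the idea follows; the functionality labels, training snippets, and TF-IDF-plus-logistic-regression choice are illustrative assumptions, not the pipeline evaluated in the thesis.

```python
# Minimal sketch: map vendor-material text to a device-functionality label.
# Labels, snippets, and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy vendor-material snippets labeled with a device functionality.
train_texts = [
    "records 1080p video and streams footage to the cloud",
    "detects motion and sends alerts to your phone",
    "adjusts home temperature on a daily schedule",
    "listens for voice commands and plays music",
]
train_labels = ["camera", "motion_sensing", "thermostat", "voice_assistant"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Statically infer functionality for an unseen product description.
print(model.predict(["this doorbell captures video when motion is detected"]))
```

An inferred label of this kind could then be checked against a home or organizational policy before a device is purchased.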

Item Open Access
Applying model-based systems engineering in search of quality by design (Colorado State University. Libraries, 2022) Miller, Andrew R., author; Herber, Daniel R., advisor; Bradley, Thomas, committee member; Miller, Erika, committee member; Simske, Steve, committee member; Yalin, Azer P., committee member
Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design process of many different types of systems. The application of these techniques can be reflected in the modeling of requirements, functions, behavior, and many other aspects. The modeled design provides a digital representation of a system and of the supporting development data architecture and functional requirements associated with that architecture through the modeling of system aspects. Various levels of system and corresponding data architecture fidelity can be represented within MBSE environment tools. Typically, the level of fidelity is driven by crucial systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that provide a representative system meeting cost and schedule with sufficient quality while maintaining customer performance needs. The most complex and elusive constraint on systems engineering is quality: given a certain set of system-level requirements, the likelihood that those requirements will be correctly and accurately realized in the final system design. This research investigates the Department of Defense Architecture Framework (DoDAF) in use today to establish, and then assess, the relationship between the system, data architecture, and requirements in terms of Quality by Design (QbD). The term QbD was coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1]. This research investigates and proposes a means to: contextualize high-level quality terms within the MBSE functional area; outline a conceptual but functional quality framework as it pertains to the MBSE DoDAF; provide tailored quality metrics with improved definitions; and test this improved quality framework by assessing two corresponding case studies within the MBSE functional area to interrogate model architectures and assess the quality of system designs. Developed in the early 2000s, DoDAF is still in use today, and its system description methodologies continue to impact subsequent system description approaches [2]. Two case studies were analyzed to demonstrate the proposed QbD evaluation of DoDAF CONOP architecture quality. The first case study analyzes the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second case study analyzes the DoDAF CONOP of a Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the Command and Control signaling mission thread. The case studies help to demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of DoDAF architecture in depth, including application of the DoDAF standard and the UML/SysML standards, requirement architecture instantiation, and modularity to understand architecture reusability and complexity. By bringing a renewed focus on a quality-based systems engineering process to application of the DoDAF, improved trust in the system and data architecture of the completed models can be achieved. The results of the case study analyses show how a quality-focused systems engineering process can be used during development to provide a product design that better meets the customer's intent and ultimately offers the potential for the best quality product.

Item Embargo
Automated extraction of access control policy from natural language documents (Colorado State University. Libraries, 2023) Alqurashi, Saja, author; Ray, Indrakshi, advisor; Ray, Indrajit, committee member; Malaiya, Yashwant, committee member; Simske, Steve, committee member
Data security and privacy are fundamental requirements in information systems. The first step to providing data security and privacy for organizations is defining access control policies (ACPs). Security requirements are often expressed in natural language, and ACPs are embedded in those requirements. However, ACPs in natural language are unstructured and ambiguous, so manually extracting them from security requirements and translating them into enforceable policies is tedious, complex, expensive, labor-intensive, and error-prone. Thus, an automated ACP specification process is crucial. In this thesis, we adopt the Next Generation Access Control (NGAC) model as our reference formal access control model to study the automation process. This thesis addresses the research question: how do we automatically translate access control policies (ACPs) from natural language expression to the NGAC formal specification? Answering this question entails building an automated extraction framework. The proposed framework aims to translate natural language ACPs into NGAC specifications automatically. The primary contributions of this research are developing models to automatically construct ACPs in the NGAC specification from natural language, and generating a realistic synthetic dataset of access control policy sentences to evaluate the proposed framework. Our experimental results are promising: on average, we achieved an F1-score of 93% when identifying ACP sentences, an F1-score of 96% when extracting NGAC relations between attributes, and F1-scores of 96% when extracting user attributes and 89% for object attributes from natural language access control policies.
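A toy illustration of the two stages the abstract describes (flagging ACP sentences, then extracting attribute relations) follows; the regular-expression patterns are illustrative stand-ins for the trained models evaluated in the thesis.

```python
# Toy two-stage sketch: (1) flag ACP sentences, (2) extract a
# (user attribute, action, object attribute) triple that could seed an
# NGAC assignment/association. Patterns are illustrative assumptions.
import re

ACTIONS = r"(?:read|write|view|modify|delete|access)"
PATTERN = re.compile(
    rf"(?P<user>\w+(?:\s\w+)?)\s+can\s+(?P<action>{ACTIONS})\s+(?P<obj>.+)", re.I)

def extract_policy(sentence: str):
    """Return an NGAC-style triple if the sentence looks like an ACP."""
    m = PATTERN.search(sentence)
    if m is None:
        return None  # stage 1: not recognized as an ACP sentence
    return (m.group("user"), m.group("action").lower(), m.group("obj").rstrip("."))

print(extract_policy("A nurse can read the medical records of assigned patients."))
print(extract_policy("The hospital cafeteria opens at 7 am."))  # None
```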

Item Open Access
Big Data decision support system (Colorado State University. Libraries, 2022) Ma, Tian J., author; Chong, Edwin, advisor; Simske, Steve, committee member; Herber, Daniel, committee member; Pezeshki, Ali, committee member
Each day, the amount of data produced by sensors, social and digital media, and the Internet of Things is rapidly increasing. The volume of digital data is expected to double within the next three years. At some point, it might not be financially feasible to store all the data that is received. Hence, if data is not analyzed as it is received, the information collected could be lost forever. Actionable Intelligence is the next level of Big Data analysis, where data is used for decision making. This thesis describes my scientific contributions to Big Data Actionable Intelligence generation. Chapter 1 presents my colleagues' and my contribution to a Big Data Actionable Intelligence architecture. The architecture has been demonstrated to support real-time actionable intelligence generation using disparate data sources (e.g., social media, satellite, newsfeeds). This work has been published in the Journal of Big Data. Chapter 2 shows my original method to perform real-time detection of moving targets using Remote Sensing Big Data. This work has also been published in the Journal of Big Data, and a U.S. patent has been issued for it. As the Field-of-View (FOV) in remote sensing continues to expand, the number of targets observed by each sensor continues to increase. The ability to track large quantities of targets in real time poses a significant challenge. Chapter 3 describes my colleague's and my contribution to the multi-target tracking domain. We have demonstrated that we can overcome real-time tracking challenges when there is a large number of targets. Our work was published in the Journal of Sensors.

Item Open Access
Detecting non-secure memory deallocation with CBMC (Colorado State University. Libraries, 2021) Singh, Mohit K., author; Prabhu, Vinayak, advisor; Ray, Indrajit, advisor; Ghosh, Sudipto, committee member; Ray, Indrakshi, committee member; Simske, Steve, committee member
Scrubbing sensitive data before releasing memory is a widely recommended but often ignored programming practice for developing secure software. Consequently, sensitive data such as cryptographic keys, passwords, and personal data can remain in memory indefinitely, thereby increasing the risk of exposure to hackers who can retrieve the data using memory dumps or exploit vulnerabilities such as Heartbleed and Etherleak. We propose an approach for detecting a specific memory safety bug called Improper Clearing of Heap Memory Before Release, referred to as Common Weakness Enumeration (CWE) 244. The CWE-244 bug in a program allows the leakage of confidential information when a variable is not wiped before heap memory is freed. Our approach uses the CBMC model checker to detect this weakness and is based on instrumenting the program using (1) global variable declarations that track and monitor the state of the program variables relevant for CWE-244, and (2) assertions that help CBMC detect unscrubbed memory. We develop a tool, SecMD-Checker, implementing our instrumentation-based algorithm, and we provide experimental validation on the Juliet Test Suite that the tool is able to detect all the CWE-244 instances present in the test suite. The proposed approach has the potential to work with other model checkers and can be extended to detect other weaknesses that require variable tracking and monitoring, such as CWE-226, CWE-319, and CWE-1239.
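The instrumentation idea can be pictured as a toy source-to-source pass that inserts a CBMC assertion before each heap release; the tracking-flag convention and regular expression below are illustrative assumptions and not the actual SecMD-Checker implementation.

```python
# Toy source-to-source pass in the spirit of the approach: before every
# free(ptr), emit a CBMC assertion over a tracking flag that the
# (instrumented) program sets when it scrubs ptr. The flag naming
# convention is an illustrative assumption, not SecMD-Checker itself.
import re

def instrument(c_source: str) -> str:
    """Insert a __CPROVER_assert before each free() call."""
    def add_assert(match):
        ptr = match.group(1)
        return (f'__CPROVER_assert({ptr}_scrubbed, "{ptr} wiped before free");\n'
                + match.group(0))
    return re.sub(r"free\s*\(\s*(\w+)\s*\)\s*;", add_assert, c_source)

snippet = """
    char *key = malloc(KEY_LEN);
    use_key(key);
    memset(key, 0, KEY_LEN); key_scrubbed = 1;  /* scrub before release */
    free(key);
"""
print(instrument(snippet))  # the instrumented file is then checked with CBMC
```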

Item Embargo
Determining systems engineering value in competitive bids (Colorado State University. Libraries, 2023) Dawson, Sandra Lynn, author; Batchelor, Ann, advisor; Arenson, David, committee member; Adams, James, committee member; Simske, Steve, committee member; Wise, Dan, committee member
Corporations need a methodology to determine the cost of existing and new Systems Engineering (SE) effort in a more relevant context by deepening its connection to the competitive bidding process. The impact of Digital Engineering (DE) on SE within competitive bids is evolving as the industry matures its DE transition and implementation. The state of the field does not consider the ongoing transition from traditional document-based SE (TDSE) to DE and its impact on SE value. This work examines the effectiveness of the SE costing models available in the literature by introducing a process to compare completed projects, using metrics of actual SE hours expended and project performance against recommended SE effort and project results. Analysis of this comparison provides justification for SE effort bid ranges and associated project results. This research endeavors to enable a more holistic and SE-centric view of SE costing that considers project characteristics and the ongoing DE transition. Finally, this research provides a new framework for the analysis results, along with references useful in the bidding of SE projects, where SE bid options can be associated with project performance, DE transition progress, and references relevant to the competitive bid approach. By applying systems thinking, using feedback loops and data from multiple organizations, understanding the SE-DE impact, and empowering engineers in the DE transition, these research results enable data-driven decisions to determine SE value in competitive bids and to optimize SE using risk management. Following this process and using an organization's data (for competitive bids and projects) will yield results specific to competitive bids, the bid technical approach, and DE transition progress. These results are communicated to competitive bid teams using an SE-focused framework.

Item Open Access
Evaluating the sustainability performance of U.S. biofuel in 2017 with an integrated techno-economic and life cycle assessment framework (Colorado State University. Libraries, 2022) Smith, Jack Philip, author; Quinn, Jason, advisor; Simske, Steve, committee member; Bandhauer, Todd, committee member
The United States produced more than 66.2 million m3 of biofuel for the transportation industry in 2017. Most of that volume (60.6 million m3) was produced as corn ethanol, and the majority of the remaining volume (4.2 million m3) as soybean-based biodiesel. Numerous works have assessed the economic and environmental performance of these two biofuel types. However, no work exists that evaluates both the economic and environmental outcomes of these two fuels with adequate geospatial resolution and national scope. In this study, a model framework is constructed that performs concurrent Techno-Economic Analysis (TEA) and Life Cycle Assessment (LCA) using high-resolution input datasets to provide a granular estimate of sustainability performance for every county in the United States. This work presents results that include sector-wide estimates and highlights the importance of capturing geographic heterogeneity. Results show a total emission volume of 55 MMT CO2-eq produced by the 2017 U.S. biofuel industry, with 7 MMT CO2-eq of that amount resulting from Land Use Change effects. Nationwide weighted mean Global Warming Potential results are 38 gCO2-eq/MJ and 37 gCO2-eq/MJ for corn ethanol and soybean biodiesel, respectively, when Land Use Change emissions are included. Minimum Fuel Selling Price results are $0.0208/MJ ($2.52/GGE) and $0.0225/MJ ($2.72/GGE) for corn ethanol and soybean biodiesel, respectively. A Zero-Emissions Cost (ZEC) metric is applied, which combines the economic and environmental performance of a fuel into a single figure: the cost of offsetting all fuel production and use emissions through Direct Air Capture (DAC) is added to the standard price of the fuel. Mean ZEC results are $0.037/MJ ($4.53/GGE) for corn ethanol and $0.039/MJ ($4.69/GGE) for soybean biodiesel, which are lower than the ZEC of conventional gasoline at $0.062/MJ ($7.45/GGE). Finally, the cost of Direct Air Capture that results in ZEC parity between each biofuel and its petroleum-based counterpart is assessed to be $49/MT CO2-eq.
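The ZEC metric as described lends itself to a back-of-the-envelope check; the MFSP and GWP inputs below are the study's reported corn ethanol results, while the DAC price is an assumed placeholder rather than the value used in the study.

```python
# Back-of-the-envelope ZEC calculation in the spirit of the metric:
# fuel price plus the cost of offsetting its life cycle emissions via DAC.
mfsp_usd_per_mj = 0.0208   # reported minimum fuel selling price, $/MJ
gwp_g_per_mj = 38.0        # reported life cycle emissions, gCO2-eq/MJ
dac_usd_per_t = 400.0      # assumed DAC cost, $/MT CO2-eq (placeholder)

offset_cost = gwp_g_per_mj / 1e6 * dac_usd_per_t   # $/MJ to offset emissions
zec = mfsp_usd_per_mj + offset_cost
print(f"ZEC = {zec:.4f} $/MJ")  # 0.0208 + 0.0152 = 0.0360 $/MJ
```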

Item Open Access
Exploration based design methodology using the theory of constraints in extending plastics manufacturing for novel high performing fabrics (Colorado State University. Libraries, 2022) Shekoni, Aderemi, author; Troxell, Wade, advisor; Simske, Steve, committee member; Young, Peter, committee member; Prieto, Amy, committee member
The world of textiles comprises several materials. From the conventional, such as cotton and silk, to the contemporary, such as polyester and nylon, textiles have changed over time. Nonwovens, a category of material frequently referred to as the "third generation" of textiles, have emerged as one of the most exciting breakthroughs in the textile industry in the past few years. Nonwovens, which are frequently confused with fibers, yarns, and fabrics, have evolved as a new category of versatile material with medicinal and industrial applications. An issue associated with the use of lightweight nonwovens is their single-use nature, in which a fabric weight category can be employed for only one product. Businesses that utilize these materials are limited in the number of products per weight class. Therefore, companies utilizing these textiles in their operations must engage with plastic producers to plan, implement, and develop a single weight class for a single product. This procedure is time-consuming and generates plastic waste from unfinished fabrics. By creating a multipurpose nonwoven fabric, organizations will be able to improve their operations by saving time and energy, improving profits, decreasing plastic waste, and enabling process innovation. Currently, to use a fabric with the same weight and similar physical properties in a different product, a different fabric is manufactured for that process, despite the similarity in weight and physical properties between the fabric used in the previous process and the fabric needed for the new one. This limitation motivated the concept of redesigning nonwoven materials for different applications. Air permeability, the fabric's barrier to airflow, is a significant factor in the inability to support numerous uses. When a fabric's desired attribute is not satisfied, its air permeability can be optimized using a variety of process approaches to attain the appropriate performance qualities. This permits the use of a single fabric in a variety of items. Due to the fabric's weight and volume, the usage of nonwovens in aviation and public works has expanded drastically. Thermal insulation is one of the most prevalent applications of nonwoven materials in the aviation industry. Nonwoven fabrics are also utilized as dynamic biofilters for filtration in public works, with an aerobic layer that aids in the recovery of alkalinity in the filtration systems used in these facilities. The two significant outcomes of this research are: (1) improvement of the airflow barrier, also known as air permeability (AP), which enables the use of a single weight class to make several goods as opposed to a single weight class for a single product, along with the addition of a thermal barrier to the fabric; permeability enhancements in nonwovens improve the fabric's sound absorption, filtration, and heat absorption; and (2) the capacity to recycle undesired nonwoven fabrics after production, as opposed to disposing of the plastic components in landfills. Nonwovens are semi-crystalline polypropylene plastics that are not easily biodegradable due to the strong chemical bonds between the polypropylene polymers; because these polypropylenes are not biodegradable, unused nonwoven fabrics are landfilled. It was through the process of prototyping that a subsystem alteration was made that enabled the development of a nonwoven fabric with better air permeability. Design as Exploration concepts are used to accomplish this. Reicofil I, II, III, and IV are the four nonwoven production systems used in this research to develop the novel fabric.
In addition, this study addressed another issue by reusing and recycling unwanted fabrics to reduce the amount of plastic waste in landfills. An extrusion method that recycles rejected and waste fabrics was the result of these approaches. The innovative method used in developing the new nonwoven fabric is being explored for use in the production of plastic films to improve the quality of goods made with polyethylene plastic polymers.

Item Open Access
Framework for optimizing survivability in complex systems (Colorado State University. Libraries, 2024) Younes, Megan Elizabeth, author; Cale, James, advisor; Gallegos, Erika, committee member; Simske, Steve, committee member; Gaofeng, Jia, committee member
Increasing high-probability, low-frequency events such as extreme weather incidents, in combination with aging infrastructure in the United States, put the survivability of the nation's critical infrastructure, such as hydroelectric dams, at risk. Maximizing resiliency in complex systems can be viewed as a multi-objective optimization that includes system performance, survivability, and economic and social factors. Systems requiring high survivability, such as a hydroelectric dam, typically require one or more redundant (standby) subsystems, which increases system cost. To optimize the tradeoff between system survivability and cost, this research introduces an approach for obtaining the Pareto-optimal set of design candidates (the "resilience frontier"). The method combines Monte Carlo (MC) sampling to estimate total survivability with a genetic algorithm (GA), referred to together as the MCGA, to obtain the resilience frontier. The MCGA is applied to a hydroelectric dam to maximize overall system survivability and is demonstrated through several numerical case studies. The results of the case studies indicate that the MCGA shows promise as a tool for evaluating survivability versus cost tradeoffs and as a potential design tool for choosing system configurations and components to maximize overall system resiliency.
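A compact sketch of the MCGA idea follows: Monte Carlo sampling estimates the survivability of a candidate redundancy configuration, and a mutation-only evolutionary loop (a simplification standing in for the full genetic algorithm) retains the nondominated cost/survivability designs. Unit survival probabilities, costs, and loop settings are illustrative assumptions.

```python
# MCGA-style sketch: MC survivability estimate + evolutionary Pareto search.
import random

UNIT_SURVIVAL = [0.90, 0.80, 0.95]  # single-unit survival prob. per subsystem
UNIT_COST = [5.0, 8.0, 3.0]         # cost per installed unit

def survivability(design, trials=2000):
    """MC estimate: the system survives if every subsystem keeps >=1 unit."""
    hits = sum(all(any(random.random() < p for _ in range(n + 1))
                   for p, n in zip(UNIT_SURVIVAL, design))
               for _ in range(trials))
    return hits / trials

def cost(design):
    return sum(c * (n + 1) for c, n in zip(UNIT_COST, design))

def mutate(design):
    child = list(design)
    i = random.randrange(len(child))
    child[i] = max(0, child[i] + random.choice((-1, 1)))
    return tuple(child)

population = {(0, 0, 0)}            # standby counts per subsystem
for _ in range(30):                 # evolve toward the resilience frontier
    population |= {mutate(d) for d in population}
    scored = [(cost(d), survivability(d), d) for d in population]
    population = {d for c, s, d in scored
                  if not any(c2 <= c and s2 >= s and (c2, s2) != (c, s)
                             for c2, s2, _ in scored)}
for c, s, d in sorted((cost(d), survivability(d), d) for d in population):
    print(f"standbys={d} cost={c:.0f} survivability~{s:.3f}")
```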

Item Open Access
Fully integrated network of networks (Colorado State University. Libraries, 2022) LaMar, Suzanna, author; Jayasumana, Anura, advisor; Cale, Jim, committee member; Guo, Yanlin, committee member; Simske, Steve, committee member
There are many different facets to developing a fully integrated network-of-networks system that can facilitate seamless information exchange between nodes within a complex network topology. For example, individual link resiliency, enhanced waveform capabilities, and spectral and spatial diversity are all critical features in providing communications that can enable connectivity and interoperability for a fully networked system extending into multiple domains (ground, surface, air, and space). Steps taken toward achieving such an architecture are introduced with emerging millimeter wave (mmW) and high-band antenna technologies that can be integrated with future tactical multifunction software defined radios (SDRs) to enable information distribution between vital networked participants, including 5th-generation aircraft. Small, lightweight mmW and high-band antenna designs that will enable small-unit tactical operations to persist under electronic warfare conditions are discussed. These small units are typically fielded with multiple communications radios that are limited in function and do not enable rapid communication on the move or high-capacity data transfers at the halt. Additionally, a revolutionary cognitive antenna (CA) is introduced, in which artificial intelligence (AI) techniques are proposed to aid in improving antenna functions, support self-healing attributes, and promote autonomous communication operations. A CA designed for future spacecraft (S/C) communications systems is presented that is environmentally perceptive: it can sense and transmit radio frequency (RF) signals and cooperate with a cognitive radio (CR) to modify waveform and beam pattern characteristics for enhanced resiliency and communications. Extending to interoperability and information exchange, data must always be secured. Common communications payload security architectures are presented as a basis for offering data protection not only to the system itself, but also to networks that are part of the larger enterprise solution. Similarly, machine learning methods are proposed to combat malicious cyber-attacks within an enterprise security space-based communications architecture, offering a more resilient, protective, adaptive framework. The machine learning algorithms seek to provide a viable solution for identifying, classifying, and detecting possible intrusions in a highly dynamic environment. Machine learning is also applied to networking strategies to predict congestion before it happens, thereby preventing bottlenecks within the network. This is especially important for critical, high-value information. A CONgestion Aware Intent-based Routing (CONAIR) architecture that facilitates faster and more reliable data exchanges between end users is proposed. The CONAIR architecture leverages platform and mission information to derive quality of service (QoS) metrics that can support network route optimization by using a network controller (NC) with machine learning to predict future network behaviors. Finally, the CA, multifunction SDRs, and NC subsystems are integrated into a robust architecture on unmanned aerial vehicles (UAVs) to form collaborative cognitive communications systems that are responsive to stressing operating conditions. Through collaborative behaviors and interactions, communications can be optimized. These discriminating technologies support the continued ambition of maturing military communications systems to benefit cooperative interactions and information exchanges between various users in multi-hop, complex networks.
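One way to picture the congestion-prediction element described for CONAIR is the toy forecast-then-route sketch below; the utilization features, the linear trend model, and the threshold are assumptions for illustration, not the architecture's actual machine learning design.

```python
# Toy sketch: forecast near-term link utilization per route, then let the
# controller prefer routes whose predicted load stays under a threshold.
import numpy as np
from sklearn.linear_model import LinearRegression

def predict_next_utilization(history):
    """Fit a short linear trend and extrapolate one step ahead."""
    t = np.arange(len(history)).reshape(-1, 1)
    model = LinearRegression().fit(t, history)
    return float(model.predict([[len(history)]])[0])

routes = {  # recent utilization samples (fraction of capacity) per route
    "route_a": [0.40, 0.50, 0.62, 0.71, 0.80],  # trending toward congestion
    "route_b": [0.50, 0.49, 0.51, 0.50, 0.52],  # stable
}
CONGESTION_THRESHOLD = 0.8
forecasts = {r: predict_next_utilization(h) for r, h in routes.items()}
viable = [r for r, u in forecasts.items() if u < CONGESTION_THRESHOLD]
print(forecasts, "-> choose:", min(viable, key=forecasts.get))
```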

Item Open Access
Human systems integration of agricultural machinery in developing economy countries: Sudan as a case study (Colorado State University. Libraries, 2022) Ahmed, Hamza, author; Miller, Erika, advisor; Owiny, James, committee member; Simske, Steve, committee member; Jablonski, Becca, committee member; Herber, Daniel, committee member
Widespread adoption of agricultural machinery in developing economy countries is commonly regarded as a fundamental component of pro-poor growth and sustainable intensification. Mechanized farming can also improve perceptions of farming and mitigate rural out-migration. However, many traditional farmers do not have access to machinery, and/or the machinery is cost-prohibitive. This study applies a systems engineering approach to identify human-systems integration (HSI) solutions in agricultural practices that more effectively adapt technologies to satisfy traditional farmers' needs. A treatment-control study was conducted on 36 farms in Sudan, Africa, over three farming seasons: 2019 (baseline), 2020, and 2021. The treatment group farmers (N = 6) were provided with agricultural machinery (i.e., tractor, cultivator, planter, and harvester), fuel for the machinery, and training to use the machinery. Farmers were interviewed at the beginning of the study and then after each planting and harvesting season during the study. Findings show that the most significant barriers to technology adoption were culture, security, and maintenance costs. However, farmers also reported that the most significant challenges in their nonmechanized farming practices were related to labor, safety, and profit margins, all of which could be addressed with machinery. Moreover, the results show that all farmers had similar net profits in 2019, when farming without machinery, while mechanized farming yielded significantly higher net profits ($16.61 per acre more in 2020 and $27.10 per acre more in 2021). Farmers also provided needs and rationales for various design options in tractors and attachments. The findings of this dissertation suggest that, despite the initial resistance to using agricultural machinery, the farmers were pleased with their experience after using farming machinery and reported an even more accepting attitude from their children toward this new farming process. These results demonstrate the importance of developing effective solutions for integrating farming technology into rural farming practices in developing economy countries. More broadly, this study can be used as an HSI framework for identifying design needs and integrating technology into users' lifestyles. The results presented in this dissertation provide a quantified difference between farming with and without machinery, which can provide a financial basis for purchasing and borrowing models, machinery design requirements, and educational value to farmers. Further, the financial values and design requirements can help inform farmers regarding expected costs, returns, and payoffs from tractor adoption. Manufacturers and policymakers can utilize this to promote technology adoption more effectively to farmers in developing economy countries.

Item Open Access
Investigation of liquid cooling on M9506A high density Keysight AXIE chassis (Colorado State University. Libraries, 2021) Gilvey, Zachary Howard, author; Bandhauer, Todd M., advisor; Marchese, Anthony, committee member; Simske, Steve, committee member
Forced convection air-cooled heat sinks are the dominant cooling method used in the electronics industry, accounting for 86% of high-density cooling in data centers. However, continual performance increases in electronics equipment are pushing these air-cooled methods to their limit. Fundamental limitations such as acoustics, cooling power consumption, and heat transfer coefficient are being reached while processor power consumption steadily rises. In this study, a 4U, 5-slot, high-density computing box is studied to determine the maximum heat dissipation in its form factor while operating at an ambient air temperature of 50°C. Two liquid cooling technologies were analyzed in this effort and compared against current state-of-the-art air-cooled systems. A new configuration using return jet impingement of dielectric fluid FC72 directly on the integrated circuit die shows up to a 44% reduction in thermal resistance compared to current microchannel liquid-cooled systems: 0.08 K/W versus 0.144 K/W, respectively. In addition, at high ambient temperatures (~45°C), the radiator of the liquid-cooled system accounts for two-thirds of the thermal resistance from ambient to junction temperature, indicating that a larger heat exchanger outside the current form factor could increase performance further. Chip efficiency was modeled, with efficiency predictions based on junction temperature. On a system level, the model showed that keeping the chassis at a 25°C ambient lowered overall power consumption significantly, by 500 W. Furthermore, the failure rate was accounted for when the chip junction temperature exceeded 75°C. FC72 jet impingement on the die showed the best performance in meeting the system cooling requirements and kept the chips below 75°C at the highest ambient temperatures, but consumed the most pumping power of all the fluids and configurations investigated. The configuration with microchannels bypassing TIM 2 showed nearly the same performance as jet impingement with water on the lid and reduced the junction temperature difference by 5°C compared to baseline. When the fluid was switched from water to a 50/50 water-glycol mixture, an additional thermal resistance of 0.010 K/W was recorded at the heat sink level, and a higher mass flow rate was required for the GC50/50 heat exchanger to achieve its minimum thermal resistance.
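The ambient-to-junction framing used in this comparison can be illustrated with a series thermal-resistance calculation; the junction-to-coolant values (0.080 vs. 0.144 K/W) are the reported comparison, while the die power and radiator resistance are assumed placeholders chosen to echo the approximately two-thirds radiator share.

```python
# Series thermal-resistance sketch: T_junction = T_ambient + q * sum(R_i).
def junction_temp(t_ambient_c, power_w, resistances_k_per_w):
    return t_ambient_c + power_w * sum(resistances_k_per_w)

POWER_W = 150.0      # assumed heat dissipated by one die
R_RADIATOR = 0.160   # assumed coolant-to-ambient (radiator) resistance

for name, r_junction_to_coolant in [("FC72 jet impingement", 0.080),
                                    ("microchannel cold plate", 0.144)]:
    tj = junction_temp(45.0, POWER_W, [r_junction_to_coolant, R_RADIATOR])
    print(f"{name}: Tj = {tj:.1f} C at 45 C ambient")
```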

Item Open Access
Optimizing designer cognition relative to generative design methods (Colorado State University. Libraries, 2023) Botyarov, Michael, author; Miller, Erika, advisor; Bradley, Thomas, committee member; Forrest, Jeffrey, committee member; Moraes, Marcia, committee member; Simske, Steve, committee member; Radford, Donald, committee member
Generative design is a powerful tool for design creation, particularly for complex engineering problems where a plethora of potential design solutions exists. Generative design systems explore the entire solution envelope and present the designer with multiple design alternatives that satisfy specified requirements. Although generative design systems present design solutions to an engineering problem, they lack consideration for the human element of the design system. Human cognition, particularly cognitive workload, can be hindered when a designer is presented with unparsed generative design system output, thereby reducing the efficiency of the systems engineering life cycle. The objective of this dissertation was therefore to develop a structured approach to produce an optimized parsing of spatially different generative design solutions, derived from generative design systems, such that human cognitive performance during the design process is improved. Foundational work on generative design usability was conducted to elaborate on gaps found in the literature concerning the human component of generative design systems. A generative design application was then created for the purpose of evaluating the research objective. A novel parsing method that leverages the Gower distance matrix and partitioning around medoids (PAM) clustering was developed and implemented in the generative design application to structurally parse the generative design solution space for the study. The application and associated parsing method were then used by 49 study participants to evaluate performance, workload, and experience during a generative design selection process, given manipulation of both the quantity of designs in the generative design solution space and filtering of parsed subsets of design alternatives. Study data suggest that cognitive workload is lowest when 10 to 100 generative design alternatives are presented for evaluation in the subset of the overall design solution space. However, subjective data indicate caution when limiting the subset of designs presented, since design selection confidence and satisfaction may decrease as the design alternative selection becomes more limited. Given these subjective considerations, it is recommended that a generative design solution space consist of 50 to 100 design alternatives, with the proposed clustering parsing method that considers all design alternative variables.
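The named parsing method (a Gower distance matrix plus PAM clustering) can be sketched directly; the toy design table and cluster count are assumptions, and the `gower` and `scikit-learn-extra` packages are one possible toolchain rather than the one used in the dissertation.

```python
# Sketch: Gower distances over mixed-type design variables, clustered with
# PAM (k-medoids); each medoid is a representative design shown to the user.
# Requires the `gower` and `scikit-learn-extra` packages.
import gower
import pandas as pd
from sklearn_extra.cluster import KMedoids

# Toy generative-design alternatives with mixed numeric/categorical variables.
designs = pd.DataFrame({
    "mass_kg":    [1.2, 1.4, 0.9, 2.1, 1.1, 1.9],
    "max_stress": [180, 150, 220, 120, 200, 130],
    "material":   ["Al", "Al", "Ti", "steel", "Ti", "steel"],
})

dist = gower.gower_matrix(designs)  # pairwise Gower distances
pam = KMedoids(n_clusters=3, metric="precomputed", method="pam",
               random_state=0).fit(dist)

# Present one representative (medoid) design per cluster.
print(designs.iloc[pam.medoid_indices_])
```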

Item Open Access
Secure CAN logging and data analysis (Colorado State University. Libraries, 2020) Van, Duy, author; Daily, Jeremy, advisor; Simske, Steve, committee member; Papadopoulos, Christos, committee member; Hayne, Stephen, committee member
Controller Area Network (CAN) communications are an essential element of modern vehicles, particularly heavy trucks. However, CAN protocols are vulnerable from a cybersecurity perspective in that they have no mechanism for authentication or authorization. Attacks on vehicle CAN systems present a risk to driver privacy and possibly driver safety. Therefore, developing new tools and techniques to detect cybersecurity threats within CAN networks is a critical research topic. A key component of this research is compiling a large database of representative CAN data from operational vehicles on the road. This database will be used to develop methods for detecting intrusions or other potential threats. In this paper, an open-source CAN logger was developed that uses hardware and software following industry security standards to securely log and transmit heavy vehicle CAN data. A hardware prototype demonstrated the ability to encrypt data at over 6 Megabits per second (Mbps) and successfully log all data at 100% bus load on a 1 Mbps CAN network in a laboratory setting. An AES-128 Cipher Block Chaining (CBC) encryption mode was chosen. A Hardware Security Module (HSM) was used to generate and securely store asymmetric key pairs for cryptographic communication with a third-party cloud database. It also implemented Elliptic-Curve Cryptography (ECC) algorithms to perform key exchange and to sign the data for integrity verification. This solution ensures secure data collection and transmission, because only encrypted data is ever stored or transmitted, and communication with the third-party cloud server uses shared, asymmetric secret keys as well as Transport Layer Security (TLS).
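The encrypt-before-store step can be illustrated with AES-128-CBC from the `cryptography` package; key handling here is deliberately simplified, since the real logger generates and protects keys in a hardware security module and exchanges them via ECC, which this toy example does not attempt.

```python
# Minimal sketch of encrypting a logged CAN frame with AES-128 in CBC mode.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)  # 128-bit session key (HSM-backed in the real device)
iv = os.urandom(16)   # per-session initialization vector

def encrypt_block(plaintext: bytes) -> bytes:
    """Pad to the AES block size and encrypt with AES-128-CBC."""
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

# An example 29-bit CAN identifier plus an 8-byte payload, as logged.
frame = (0x18FEF100).to_bytes(4, "big") + bytes.fromhex("0011223344556677")
print(encrypt_block(frame).hex())
```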

Item Open Access
Sustainable recycling of metal machining swarf via spark plasma sintering (Colorado State University. Libraries, 2021) Sutherland, Alexandra E., author; Ma, Kaka, advisor; Sambur, Justin, committee member; Simske, Steve, committee member
In general, extracting virgin metals from natural resources exerts a significant environmental and economic impact on our earth and society. Production of virgin stainless steels and titanium (Ti) alloys has caused particular concern because of the high demand for these two classes of metals across many industries, with low fractions of scrap (less than one-third for steels and one-fourth for Ti alloys) currently recirculated back into supply. In addition, conventional recycling methods for metals require multiple steps and significant energy consumption. With the overarching goal of reducing energy consumption and streamlining recycling practices, the present research investigated the effectiveness of directly reusing stainless steel swarf and Ti-6Al-4V alloy swarf as feedstock for spark plasma sintering (SPS) to make solid bulk samples. The parts made from machining swarf were characterized to tackle material challenges associated with the swarf, such as irregular shapes and higher oxygen content. The hypothesis was that while solid bulk parts made from metal swarf would contain undesired pores that degrade mechanical performance, some mechanical properties (e.g., hardness) could be comparable to, or even outperform, the industrial standard counterparts made from virgin materials, because of the cold working and grain refinement that occurred in the swarf during machining and the capability of SPS to retain ultrafine microstructures. 304L stainless steel and Ti-6Al-4V (Ti64) alloy swarf were collected directly from machining processes, cleaned, and then consolidated into bulk samples by SPS with or without the addition of gas-atomized powder. Nanoindentation and Vickers indentation were utilized to evaluate hardness at two length scales. Ball milling was performed on Ti64 to assess the energy consumption required to effectively convert swarf to varied morphologies. In addition, to provide insight into the macroscale mechanical behavior of the materials made by SPS of recycled swarf, finite element modeling (FEM) was used to predict tensile stress-strain curves and the corresponding stress distributions in the samples. The key findings from my research prove that reuse of austenitic stainless steel chips and Ti64 alloy swarf as feedstock for SPS is an effective and energy-efficient approach to recycling metal scrap, compared to the production and use of virgin gas-atomized powders or conventional metal recycling routes. The mechanical performance of the samples made from metal swarf outperformed the relevant industrial standard materials in terms of hardness, while ductility remains a concern due to the presence of pores. Therefore, future work is proposed to continue to address the challenges associated with mechanical performance, including but not limited to tuning the SPS processing parameters, quantifying an appropriate amount of powder addition as a sintering aid, and refining the morphology of the swarf by ball milling. It is critical for the health of our planet to always consider the tradeoff between energy consumption and materials performance.

Item Open Access
System understanding of high pressure die casting process and data with machine learning applications (Colorado State University. Libraries, 2021) Blondheim, David J., Jr., author; Anderson, Charles, advisor; Simske, Steve, committee member; Radford, Donald, committee member; Kirby, Michael, committee member
Die casting is a highly complex manufacturing system used to produce near-net-shape castings. Although the process has existed for more than a hundred years, a systems engineering approach to defining the process and the data die casting can generate each cycle has not been completed. Industry and academia have instead focused on a narrow scope of data deemed to be the critical parameters within die casting. With this narrow focus, most of the published research on machine learning within die casting has had limited success and applicability in a production foundry. This work investigates the die casting process from a systems engineering perspective and shows meaningful ways of applying machine learning. The die casting process meets the definition of a complex system both in the technical sense and in the way that humans interact within the system. From the technical definition, the die casting system is a network structure that is adaptive and can self-organize. Die casting also has nonlinear components that make it dependent on initial conditions. An example of this complexity is seen in the stochastic nature of porosity formation, even when all key parameters are held constant. Die casting is also highly complex due to human interactions. In manufacturing environments, humans complete visual inspection of castings to label quality results. Poor inspection performance creates misclassification and data space overlap issues that further complicate supervised machine learning algorithms. The best way to control a complex system is to create feedback within that system; for die casting, this feedback system will come from Industry 4.0 connections. A systems engineering approach defines the critical process and then creates groups of data in a data framework. This data framework shows that the available data volume is several orders of magnitude larger than what is currently being used within the industry. With an understanding of the complexity of die casting and a framework of available data, the challenge becomes identifying appropriate applications of machine learning. The argument is made, and four case studies show, that unsupervised machine learning provides value by automatically monitoring the data that can be obtained and identifying anomalies within the die cast manufacturing system. This process control improvement removes noise from the system, allowing one to gain knowledge about the die casting process. In the end, the die casting industry can better understand and utilize the data it generates with machine learning.
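The unsupervised-monitoring pattern the case studies describe can be sketched with an isolation forest over simulated per-shot process signals; the algorithm choice, feature names, and data are illustrative assumptions rather than the dissertation's case-study models.

```python
# Sketch: unsupervised anomaly detection over per-cycle process signals,
# flagging shots that drift from normal operation without quality labels.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated per-cycle features: [biscuit_size_mm, fast_shot_velocity_m_s]
normal_shots = rng.normal([30.0, 3.5], [0.5, 0.1], size=(500, 2))
drifted_shots = rng.normal([33.0, 3.0], [0.5, 0.1], size=(5, 2))

model = IsolationForest(contamination=0.02, random_state=0).fit(normal_shots)
flags = model.predict(np.vstack([normal_shots[-5:], drifted_shots]))
print(flags)  # -1 marks shots the model considers anomalous
```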

Item Open Access
Systems engineering analysis and application to the Emergency Response System (Colorado State University. Libraries, 2021) Marzolf, Gregory S., author; Sega, Ronald, advisor; Bradley, Thomas, advisor; Simske, Steve, committee member; van de Lindt, John, committee member
This research seeks to apply systems engineering methods to build a more effective emergency response system, called the Engineered Emergency Response System (EERS), to minimize adverse impacts and consequences of incidents. Systems engineering processes were used to identify stakeholder needs and requirements, and systems engineering methodologies were then used to build the system. Emphasis was placed on building a more capable engineered system that could handle not only routine emergencies but also events of increased complexity, uncertainty, and severity. The resulting EERS was built on suitability constraints including conformance to the National Response Framework, the National Incident Management System Framework, and the community fragility concept, as well as ease of transformation from the existing system. Empirical data from two complex events in Colorado's El Paso County, the Waldo Canyon Wildland Urban Interface fire in 2012 and the Black Forest Wildland Urban Interface fire in 2013, were used to inform the system's design and operation. These complex and dynamic events were deemed representative of other complex events based on existing publications and research. After the engineered system was built, it was validated by: 1) using the Functional Dependency Network Analysis model with data obtained from the two fires; 2) evaluating best practices that were integrated into the EERS; 3) qualitatively assessing system suitability requirements; and 4) conducting a Delphi study to assess the value of applying systems engineering to this research area and the feasibility of implementing the EERS into existing systems. The validation provided evidence that the EERS is more effective than the existing system while showing that it is also suitable and feasible. The Delphi study provided evidence that the systems engineering approach was deemed valuable by the subject matter experts. More research is needed to determine system needs and capabilities for specific communities in consideration of their unique organizations, cultures, environments, and associated hazards, and in the areas of command and control and communications.

Item Open Access
Techno-economic analysis of advanced small modular nuclear reactors (Colorado State University. Libraries, 2022) Asuega-Souza, Anthony, author; Quinn, Jason, advisor; Simske, Steve, committee member; Bandhauer, Todd, committee member
Small modular nuclear reactors (SMRs) represent a robust opportunity to develop low-carbon and reliable power with the potential to reach cost parity with conventional power systems. This study presents a detailed, bottom-up economic evaluation of a 12x77 MWe (924 MWe total) light-water SMR (LW-SMR) plant, a 4x262 MWe (1,048 MWe total) gas-cooled SMR (GC-SMR) plant, and a 5x200 MWe (1,000 MWe total) molten salt SMR (MS-SMR) plant. Cost estimates are derived from equipment costs, labor hours, material inputs, and process-engineering models. The advanced SMRs are compared to natural gas combined cycle plants and a conventional large reactor. Overnight capital cost (OCC) and levelized cost of energy (LCOE) estimates are developed. The OCCs of the LW-SMR, GC-SMR, and MS-SMR are found to be $4,844/kW, $4,355/kW, and $3,985/kW, respectively. The LCOEs of the LW-SMR, GC-SMR, and MS-SMR are found to be $89.6/MWh, $81.5/MWh, and $80.6/MWh, respectively. A Monte Carlo analysis is performed, in which the OCC and construction time of the LW-SMR are found to have lower means and standard deviations than a conventional large reactor. The LW-SMR OCC is found to have a mean of $5,233/kW with a standard deviation of $658/kW and a 90% probability of remaining between $4,254/kW and $6,399/kW, while the construction duration is found to have a mean of 4.5 years with a standard deviation of 0.8 years and a 90% probability of remaining between 3.4 and 6.0 years. The economic impact of economies of scale, simplification, modularization, and construction time is evaluated for SMRs. Policy implications of direct capital subsidies and a carbon tax on natural gas emissions are additionally explored.
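A simplified levelized-cost calculation shows the general shape of such an estimate; apart from the reported $4,844/kW LW-SMR overnight capital cost, every input below is an assumed placeholder, so the output will not reproduce the study's $89.6/MWh bottom-up result.

```python
# Simplified LCOE sketch: levelize an annualized capital charge plus fixed
# O&M and fuel over annual generation. All inputs except the OCC are
# assumed placeholders, not values from the study.
def lcoe_usd_per_mwh(occ_usd_per_kw, fcr, fixed_om_usd_per_kw_yr,
                     fuel_usd_per_mwh, capacity_factor):
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0  # annual MWh per kW
    capital = occ_usd_per_kw * fcr / mwh_per_kw_yr   # annualized capital
    fixed_om = fixed_om_usd_per_kw_yr / mwh_per_kw_yr
    return capital + fixed_om + fuel_usd_per_mwh

lw_smr = lcoe_usd_per_mwh(4844, fcr=0.07, fixed_om_usd_per_kw_yr=95.0,
                          fuel_usd_per_mwh=9.0, capacity_factor=0.92)
print(f"LW-SMR LCOE (toy inputs) ~ {lw_smr:.1f} $/MWh")
```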

Item Open Access
The manufacturing and soft robotic applications of free stroke twisted and coiled actuators (Colorado State University. Libraries, 2022) Tighe, Brandon Z., author; Zhao, Jianguo, advisor; Endeshaw, Haile, committee member; Simske, Steve, committee member
Inspired by biological systems (e.g., the octopus), soft robots made from soft materials outperform traditional rigid robots in terms of safety and adaptivity because of their compliant and deformable bodies. To enable a soft robot's unique capabilities, a key component is required: the actuator. Many different actuators have been used, including conventional pneumatic-driven and cable-driven methods, as well as several emerging approaches such as dielectric elastomers, liquid crystal elastomers, and shape memory alloys. Beyond existing actuation approaches, another promising actuator for soft robots is the twisted-and-coiled actuator (TCA), which can be conveniently fabricated by continuously twisting polymer fibers into a coiled, spring-like shape. In this thesis, we investigate free stroke TCAs (i.e., TCAs that can produce significant displacements without preloading). We first describe a customized machine that can automatically fabricate TCAs with free strokes by twisting a polymer fiber and then coiling the twisted fiber along a mandrel with a guide channel, which is made by wrapping a small copper wire helically about a larger one. After that, we discuss the characterization and evaluation of the fabricated TCAs. We also apply free stroke TCAs to two different soft robotic systems. The first is a spherical tensegrity robot, which resembles a tensegrity structure: a compliant yet stable structure made of rigid rods and elastic cables. By replacing the elastic cables with TCAs, we can actuate the TCAs to shift the robot's center of gravity and generate rolling locomotion. The second application is a shape-morphing quadrupedal robot with multiple modes of locomotion. By actively morphing the robot's body shape, we demonstrate different locomotion modes for the same robot, including walking on flat ground, crawling below a gap, and climbing across a bridge. The demonstrations of the tensegrity robot and the shape-morphing robot will facilitate future biologically inspired adaptive robotic systems that actively adapt their morphologies and behaviors to different environments.

Item Embargo
Towards automated manufacturing of composites via thermally assisted frontal polymerization (Colorado State University. Libraries, 2024) Jordan, Walter Patrick, author; Yourdkhani, Mostafa, advisor; Zhao, Jianguo, committee member; Simske, Steve, committee member
Current methods for the manufacturing and repair of fiber-reinforced thermoset composites are energy-intensive, slow, and costly due to the extensive processing steps and expensive equipment required to achieve complete cure. This is especially true for large, complex geometries that require autoclaves and prolonged cure times. As a result, there is a need to develop faster, cost-effective, energy-efficient processes. With the implementation of rapid-curing thermoset resins, the cure cycle can be reduced from hours to minutes. This research focuses on the development, implementation, and testing of these resin systems in the established fields of mobile additive manufacturing and filament winding to demonstrate unprecedented, rapid manufacturing of composite parts. Additive manufacturing of fiber-reinforced thermoset composites is desirable due to its inherent ability to produce custom, complex parts quickly, with minimal required tooling. By printing and simultaneously curing the composite as it is deposited, freeform unsupported structures with high mechanical properties can be created. One limitation of current additive manufacturing methods is the print volume associated with traditional gantry-style systems. By combining the highly desirable properties of additive manufacturing using rapid, thermally curable resin systems with the mobility of a mobile additive manufacturing system, large, mechanically sound structures with virtually no limitations on print volume can be created. Moreover, rapid-curing thermoset resin systems have the potential to revolutionize traditional composite manufacturing processes. Due to its wide range of applications and its ubiquitous nature, filament winding serves as a natural starting point. Traditional filament winding is typically a two-step manufacturing process, where the composite part is first wound on a rotating mandrel and then cured using autoclaves or ovens. By combining these processes on the winding machine, the labor involved in manufacturing, the energy required for curing, and the overall production time are significantly reduced. In this research, a mobile additive manufacturing robot is designed, validated, and optimized for accurate locomotion and fast, dimensionally accurate printing of composite structures with high fiber alignment and degree of cure. The capabilities of this system are exhibited through several demonstrations that involve printing unsupported structures upside-down, manufacturing a bridge strong enough for the robot to pass over, and bridging material across a 60 cm gap. Additionally, a pre-existing filament winding machine is optimized for the manufacturing of large, geometrically unconstrained composite structures. Improvements in fiber volume fraction are achieved through processing changes, and a thermal profile for dry fibers is established to facilitate identification of frontal polymerization.