Browsing by Author "Bradley, Thomas H., committee member"

Item Open Access A multi-functional electrolyte for lithium-ion batteries (Colorado State University. Libraries, 2016) Westhoff, Kevin A., author; Bandhauer, Todd M., advisor; Bradley, Thomas H., committee member; Prieto, Amy L., committee member

Thermal management of lithium-ion batteries (LIBs) is paramount for multi-cell packs, such as those found in electric vehicles, to ensure safe and sustainable operation. Thermal management systems (TMSs) maintain cell temperatures well below those associated with capacity fade and thermal runaway to ensure safe operation and prolong the useful life of the pack. Current TMSs apply single-phase liquid cooling to the exterior surfaces of every cell, decreasing the volumetric and gravimetric energy density of the pack. In the present study, a novel, internal TMS that utilizes a multi-functional electrolyte (MFE) is investigated, in which a volatile co-solvent boils upon heat absorption in small channels in the positive electrode of the cell. The inert fluid HFE-7000 is investigated as the volatile co-solvent in the MFE (1 M LiTFSI in 1:1 HFE-7000/ethyl methyl carbonate by volume) for the proposed TMS. In the first phase of the study, the baseline electrochemical performance of the MFE is determined by conductivity, electrochemical stability window, half- and full-cell cycling with lithium iron phosphate (LiFePO4), lithium titanate oxide (Li4Ti5O12), and copper antimonide (Cu2Sb), and impedance spectroscopy measurements. The results show that the MFE containing HFE-7000 has stability and cycling performance comparable to a conventional lithium-ion electrolyte (1 M LiPF6 in 3:7 ethylene carbonate/diethyl carbonate by weight). The MFE-containing cells had higher impedance than carbonate-only cells, indicating reduced passivation capability on the electrodes. Additional investigation is warranted to refine the binary MFE mixture by the addition of solid electrolyte interphase (SEI) stabilizing additives. To validate the thermal and electrochemical performance of the MFE, Cu2Sb and LiFePO4 are used in a full-cell architecture with the MFE in a custom electrolyte boiling facility. The facility enables direct viewing of the vapor generation within the channel in the positive electrode and characterizes the galvanostatic electrochemical performance. Test results show that the LiFePO4/Cu2Sb cell is capable of operation even when a portion of the more volatile HFE-7000 is continuously evaporated under an extreme heat flux, proving the concept of an MFE. The conclusions presented in this work inform the future development of the proposed internal TMS.

Item Open Access An open source interface for distribution system modeling in power system co-simulation applications and two algorithms for populating feeder models (Colorado State University. Libraries, 2017) Kadavil, Rahul, author; Suryanarayanan, Siddharth, advisor; Siegel, Howard J., committee member; Bradley, Thomas H., committee member

The aging electric power system infrastructure is undergoing a transformative change, triggered mainly by the large-scale integration of distributed resources such as distributed generation, hybrid loads, and home energy management systems at the end-use level. The future electric grid, also referred to as the Smart Grid, will use these distributed resources to intelligently manage day-to-day power system operations with minimal human intervention.

The proliferation of these advanced Smart Grid resources may lead to coordination problems in maintaining the generation-demand balance at all times. To ensure their safe integration with the grid, extensive simulation studies of distributed resources need to be performed. Simulation studies serve as an economically viable alternative to expensive failures. They also serve as an invaluable platform to study energy consumption behavior, demand response, power system stability, and power system state estimation. Traditionally, power system analysis has been performed in isolated domains using separate simulation tools for the transmission and distribution systems. Moreover, modeling all power system assets in a single tool is difficult and inconclusive. From the Smart Grid perspective, a common simulation platform for different power system analysis tools is essential. A co-simulation framework enables multiple power system tools, each modeling a single domain in detail, to run simultaneously and interact, providing a holistic power system overview. To enable the co-simulation framework, a data exchange platform between the transmission and distribution system simulators is proposed to model transmission and distribution assets on different simulation testbeds. A graphical user interface (GUI) is developed as a frontend for the data exchange platform; it makes use of two developed algorithms that simplify the tasks of (1) modeling distribution assets consisting of diverse feeder datasets for the distribution simulator and balanced three-phase assets for the transmission system simulator, and (2) populating the distribution system with loads having stochastic profiles for timestep simulations. The load profiles used in the distribution system models are created using concepts from one-dimensional random walk theory to mimic the energy consumption behavior of the residential class of consumers. The algorithms can simulate large-scale distribution system assets linked to a transmission system for co-simulation applications. The proposed algorithms are tested on the standard Roy Billinton Test System (RBTS) to model detailed distribution assets linked to a selected transmission node. Two open source power system simulators, MATPOWER© and GridLAB-D©, are used for the transmission and distribution simulation processes. The algorithms accurately create a detailed distribution topology populated with 4026 residential loads expanded from the transmission node, bus 2 in RBTS. Thus, automated modeling of power system transmission and distribution assets is proposed, and its application is demonstrated on a standard test system.
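
The abstract above describes load profiles built from one-dimensional random walk concepts. A minimal Python sketch of that idea, assuming a bounded walk around a baseline demand; the step size, bounds, and 15-minute resolution are illustrative assumptions, not details from the thesis:

```python
import numpy as np

def random_walk_load_profile(n_steps=96, base_kw=1.2, step_kw=0.1,
                             min_kw=0.2, max_kw=5.0, seed=None):
    """Generate a synthetic residential load profile (kW) as a bounded
    one-dimensional random walk around a baseline demand."""
    rng = np.random.default_rng(seed)
    load = np.empty(n_steps)
    load[0] = base_kw
    for t in range(1, n_steps):
        # Each step moves demand up or down by a random fraction of step_kw.
        load[t] = load[t - 1] + rng.uniform(-step_kw, step_kw)
        load[t] = min(max(load[t], min_kw), max_kw)  # keep within plausible bounds
    return load

# One day of 15-minute interval data for a single residential customer.
profile = random_walk_load_profile(seed=42)
print(f"Peak demand: {profile.max():.2f} kW, energy: {profile.sum() * 0.25:.1f} kWh")
```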

Item Open Access Characterization of solids in produced water from wells fractured with recycled and fresh water (Colorado State University. Libraries, 2015) Li, Gen, author; Carlson, Kenneth H., advisor; Omur-Ozbek, Pinar, committee member; Bradley, Thomas H., committee member

Water management is a central issue in oil and gas development. Hydraulic fracturing applied in unconventional tight oil and gas development requires large amounts of water, and the wastewater that results after production, which contains high levels of organic and inorganic matter, usually is disposed of through deep well injection. A new approach reuses this produced water as part of subsequent fracturing fluid, an alternative that could significantly reduce both fresh water demand and the cost associated with deep well injection.

However, produced water must be treated prior to reuse to remove most of the suspended solids and multivalent ions that would otherwise cause scale or clogging problems. Understanding the amount and composition of solids in produced water is crucial to achieving optimized treatment and reuse. This study targeted the characterization, both qualitative and quantitative, of the solids in produced water from oil and gas operations and the comparison of solids from wells fractured with fresh water and recycled water. Samples were collected from five wells at the Crow Creek and Chandler State pads in the Wattenberg field of Northern Colorado. Wells in the same pad were fractured either with fresh surface water only or with water blended with some portion of recycled produced water. Gravimetric analyses of dissolved and suspended solids were performed, and particle size distributions of suspended solids were measured. Suspended solids also were isolated and characterized with X-ray photoelectron spectroscopy (XPS). Gravimetric analyses showed that total dissolved solids (TDS) averaged about 24,000 mg/L and 17,000 mg/L for the Crow Creek and Chandler State wells, respectively. Total suspended solids (TSS) concentrations were much lower, measuring 550 and 260 mg/L for the two pads. About 9 to 25 percent of TDS was volatile, and 88 to 99 percent of TSS was volatile. Particle sizes were high during the first few days of production and then stabilized at about 400 nm and 900 nm for wells on the Crow Creek and Chandler State pads, respectively. At the Crow Creek pad, particle sizes were smaller and mono-distributed in produced water samples collected during the first week of production from the well fractured with recycled water, suggesting that the recycled water was more compatible with the shale formation and that wells fractured with recycled water tend to clean out faster. XPS tests of the isolated suspended solids showed the presence of major elements such as oxygen, carbon, and silicon, along with minor elements such as calcium, magnesium, zirconium, and iron. Core-level scanning confirmed that the isolated suspended solids were mainly composed of carbonate-based minerals and metal oxides; several iron compounds with different valences were also found in the produced water samples.
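
The gravimetric TDS/TSS figures above follow from a standard weigh-dry-ignite sequence. A minimal sketch of the arithmetic, assuming Standard Methods-style definitions; the sample masses below are made up for illustration:

```python
def solids_mg_per_l(dish_g, dried_g, ignited_g, sample_ml):
    """Compute total and volatile solids (mg/L) from gravimetric weights:
    residue after drying gives total solids; mass lost on ignition
    (e.g., at 550 C) gives the volatile fraction."""
    total = (dried_g - dish_g) * 1e6 / sample_ml      # g -> mg, mL -> L
    volatile = (dried_g - ignited_g) * 1e6 / sample_ml
    return total, volatile

# Hypothetical dish/residue masses for a 10 mL produced-water aliquot.
tds, vds = solids_mg_per_l(dish_g=50.0000, dried_g=50.2400,
                           ignited_g=50.2160, sample_ml=10.0)
print(f"TDS = {tds:.0f} mg/L, volatile fraction = {100 * vds / tds:.0f}%")
```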

Item Open Access Development of graphical user interface tools for optimal fluid management in shale oil and gas operations (Colorado State University. Libraries, 2015) Shoaei, Farnaz, author; Catton, Kimberly B., advisor; Carlson, Kenneth H., advisor; Bradley, Thomas H., committee member

Oil and gas extraction is increasing in many parts of the country due to the use of hydraulic fracturing. Hydraulic fracturing is a technique to extract oil and gas from shale rock formations, characterized by the input of large quantities of pressurized water into horizontal wells. The high-pressure fluid generates cracks in the shale formation that release the gas, oil, and other constituents into the fluid. The fluid that returns to the surface is characterized as flowback or produced water. Flowback is defined as the water that returns to the surface prior to the initiation of oil or gas production, and produced water refers to the post-production return water. There is widespread public and government agency interest in assessing the quantity and quality of water used in hydraulic fracturing to ensure environmental protection and public health.

Optimal water management in hydraulic fracturing has the potential to (1) reduce freshwater use, (2) increase produced water recycling, (3) reduce energy expenditures from water transport, and (4) enhance safety and environmental protection in the development of natural gas and other petroleum resources. Improved management of water can enhance safety and environmental protection by minimizing impacts such as road damage, truck traffic, noise, air pollution, water pollution, and landscape disturbance. Interactive management tools allow operators to increase water reuse and minimize the environmental risks of hydraulic fracturing. This research entails developing graphical user interface tools to optimize water management in shale oil and gas operations. The tools that were developed are (1) a Water Production Modeling Tool, (2) a Water Use Calculator, and (3) a Water Quality Tool. The tools are MATLAB executable files that can run without a MATLAB license. Their output lets users predict wastewater production, estimate the water demand needed for treatment, and analyze water quality components such as contaminant concentrations.
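
The Water Production Modeling Tool predicts wastewater volumes over time. The abstract does not state the model form, so this is a sketch under an assumed simple exponential decline curve, a common first approximation for well production forecasting; all numbers are placeholders:

```python
import numpy as np

def produced_water_forecast(q_initial, decline_rate, days):
    """Forecast daily produced-water rate (bbl/day) with an exponential
    decline: q(t) = q_i * exp(-D * t). The model form and parameters are
    illustrative assumptions, not taken from the thesis."""
    t = np.arange(days)
    return q_initial * np.exp(-decline_rate * t)

rates = produced_water_forecast(q_initial=500.0, decline_rate=0.05, days=90)
print(f"Cumulative 90-day produced water: {rates.sum():,.0f} bbl")
```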

Item Open Access Flowback quality characterization for horizontal wells in Wattenberg field (Colorado State University. Libraries, 2013) Jiang, Xi, author; Carlson, Kenneth H., advisor; Omur-Ozbek, Pinar, committee member; Bradley, Thomas H., committee member

The development of hydraulic fracturing has driven the need for more fresh water and has also increased the amount of flowback being produced. Faced with a shortage of usable water, transportation issues, strict environmental regulation, and environmental concerns, oil and gas companies treat flowback management as an important topic. Recycling and reusing flowback is a promising approach, since it can simultaneously reduce the need for fresh water for fracturing and decrease potential environmental issues. Understanding the quality characteristics of flowback is essential for implementing the required treatment of flowback water. Flowback returns to the surface during and after hydraulic fracturing, often over a period of 3-4 weeks, though most wells finish in 7 to 10 days. The fluid contains high total dissolved solids (TDS) and high salinity, and it also contains some of the same chemicals that are pumped into the wells. The volume of flowback can range from 10%-50% of the initially injected fracturing fluid. In this study, sampling ran from March to April 2013, and all samples were taken separately from Wells Ranch State PC USX #AA16-69-1HNL and Wells Ranch State USX #AA16-68-1HNL; the results in this report use wells #68 and #69. Well #68 was injected with PermStim fracture fluid (injected pH 5.0), and well #69 was injected with SliverStim fracture fluid (injected pH 10.2). Wellhead pressure, temperature, pH, dissolved carbon dioxide (CO2), bicarbonate (HCO3), and dissolved hydrogen sulfide (H2S) were measured in the field once samples were collected. TDS, chloride, sulfate, bicarbonate, aluminum, barium, boron, calcium, iron, magnesium, potassium, silicon, strontium, and zirconium were analyzed by E-Analytics Laboratory. The objective of this paper is to analyze flowback water quality from two horizontal wells at the same location that were injected with two different fracturing fluids.

Based on the resulting temporal quality trends, this paper also analyzes the impact of different pH levels on water quality and the possible chemical reactions that occur during the drilling and fracturing phases.

Item Open Access Investigation of superturbocharger performance improvements through steady state engine simulation (Colorado State University. Libraries, 2010) Whitley, Kevin Lee, author; Olsen, Daniel B., advisor; Bradley, Thomas H., committee member; Zimmerle, Daniel John, committee member; Labadie, John W., committee member

An integrated supercharger/turbocharger (SuperTurbo) is a device that combines the advantages of supercharging, turbocharging, and turbocompounding while eliminating some of their individual disadvantages. High boost, turbocompounding, and advanced controls are important strategies in meeting impending fuel economy requirements. High boost increases engine power output while many losses remain constant, producing an overall efficiency gain. Turbocompounding increases engine efficiency by capturing excess exhaust turbine power at high speed and torque. Supercharging improves low-speed, high-torque operating performance. Steady state performance gains of a SuperTurbo-equipped engine are investigated using engine simulation software. The software uses a 1-D wave flow assumption to model the engine's unsteady flow behavior through one-dimensional pipes. With these pipes connected to other engine components, the overall performance of the engine can be modeled. GT-Power was chosen to run the simulations because an already-correlated engine model was available. The software is used to 'tune' an existing stock engine model to approximate stock engine data over the full speed and torque range. The SuperTurbo is then added to the model, and simulations are performed over the full engine speed and torque range for direct comparison with the stock engine. The model results show turbocompounding to be most effective at high speeds and torques, in the region above 10 bar BMEP in the 3000-4000 RPM range and above 5 bar BMEP in the 500-6000 RPM range. In addition to turbocompounding, there are fuel savings from reduced use of the compressor when it is not needed. In the stock configuration, boost pressure created by compressor power is then restricted by the throttle, from around 2500 RPM in the 8-12 bar BMEP range up to 6000 RPM in the 2-10 bar BMEP range. Controlling compressor speed to produce no boost at these operating points improves efficiency by not wasting energy creating boost that is not needed.
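
BMEP (brake mean effective pressure), used throughout the operating maps above, normalizes torque by displacement so results are engine-size independent. A small sketch of the standard relation for a four-stroke engine; the example torque and displacement are arbitrary:

```python
import math

def bmep_bar(torque_nm, displacement_l, rev_per_cycle=2):
    """Brake mean effective pressure for a reciprocating engine:
    BMEP = 2 * pi * n_R * T / V_d, with n_R = 2 for a four-stroke."""
    v_d = displacement_l / 1000.0          # L -> m^3
    bmep_pa = 2 * math.pi * rev_per_cycle * torque_nm / v_d
    return bmep_pa / 1e5                   # Pa -> bar

# e.g., 160 N*m from a 2.0 L four-stroke engine -> ~10 bar BMEP
print(f"BMEP = {bmep_bar(160, 2.0):.1f} bar")
```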

Item Open Access Multi-criteria decision-making approach for building maintenance in facility management (Colorado State University. Libraries, 2021) Besiktepe, Deniz, author; Ozbek, Mehmet E., advisor; Atadero, Rebecca A., advisor; Grigg, Neil S., committee member; Bradley, Thomas H., committee member; Valdes-Vasquez, Rodolfo, committee member

Facility Management (FM) encompasses multi-disciplinary processes to ensure that the built environment functions properly for its intended use and service. Maintenance practices are critical to sustaining the longevity of the built environment. As buildings continue to age, there is an increasing need for effective maintenance practices and strategies. In addition, cost and financial constraints require enhanced building maintenance decision-making processes to ensure that resources are allocated efficiently for the best possible outcome.

Building maintenance decisions present challenges to FM professionals. These challenges arise from the complexity of building systems as well as the participation of multiple stakeholders in the process, such as the property owner, facility manager, engineer, project supervisors, technicians, and occupants. The overarching goal of this dissertation is to develop a systematic and structured multi-criteria decision-making (MCDM) approach for building maintenance practices in a resource-constrained environment. To do so, this dissertation includes three separate but related studies, each focusing on an essential piece of the MCDM approach. The first study identified the set of fundamental criteria needed to construct an MCDM model for FM decision-making, utilizing the results of a nationwide survey conducted with members of the International Facility Management Association (IFMA) and APPA (Leadership in Educational Facilities), two globally recognized FM organizations in the United States. The first study also has an exploratory aspect, establishing the decision-making and condition assessment practices currently used in FM. The second study focused on developing a resource-efficient, quantitative condition assessment (CA) framework to establish a condition rating value. Condition information is essential in the decision-making process of building maintenance; however, financial challenges limit the practice of CA, which currently is based mostly on visual inspections and is likely to generate subjective outcomes. Fuzzy sets theory is utilized to obtain a quantitative condition rating value that is less subjective than one obtained through visual inspection alone, as fuzzy sets theory handles imprecise, uncertain, and ambiguous judgments through membership relations. In the third study, an MCDM method, Choosing by Advantages (CBA), is used to develop a structured and systematic decision-making approach for building maintenance and FM. CBA identifies the most value-generating alternative in the absence of cost and financial constraints, which helps eliminate the dominance of financial considerations in the decision-making process. In addition, CBA provides a practical framework for decision-makers in FM with various backgrounds, allowing the participation of multiple stakeholders in the process. This study contributes to the body of knowledge in the FM domain by identifying criteria in the building-maintenance decision-making process, developing a less subjective, quantitative CA framework, and demonstrating an MCDM method for a systematic approach to building-maintenance decision-making. Additionally, this study will benefit FM professionals and decision-makers at all levels by helping to prioritize maintenance activities, justify maintenance budget requests, and support strategic planning.
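
The second study maps imprecise inspection judgments to a numeric condition rating with fuzzy sets. A minimal sketch of that general idea, assuming triangular membership functions and centroid defuzzification; the linguistic terms, breakpoints, and example judgments are illustrative, not the dissertation's actual framework:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Linguistic condition terms on a 0-10 scale (illustrative breakpoints).
terms = {"poor": (0, 2, 4), "fair": (3, 5, 7), "good": (6, 8, 10)}

def condition_rating(inspector_degrees):
    """Defuzzify inspector judgments, e.g. {'fair': 0.7, 'good': 0.3},
    into a single crisp condition rating via the centroid method."""
    x = np.linspace(0, 10, 1001)
    mu = np.zeros_like(x)
    for term, degree in inspector_degrees.items():
        a, b, c = terms[term]
        # Clip each term's membership at the stated degree, then take the union.
        mu = np.maximum(mu, np.minimum(triangular(x, a, b, c), degree))
    return (x * mu).sum() / mu.sum()   # centroid of the aggregated fuzzy set

print(f"Condition rating: {condition_rating({'fair': 0.7, 'good': 0.3}):.2f} / 10")
```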

Item Open Access Optimization of a centrifugal electrospinning process using response surface methods and artificial neural networks (Colorado State University. Libraries, 2014) Greenawalt, Frank E., author; Duff, William S., advisor; Bradley, Thomas H., committee member; Labadie, John W., committee member; Popat, Ketul C., committee member

For complex system designs involving a large number of process variables, models are typically created to evaluate system behavior under various operating conditions. These models are useful in understanding the effect that various process variables have on the process response(s).

Design of Experiments (DOE) and Response Surface Methodology (RSM) are typically used together as an effective approach to optimize a process. RSM and DOE commonly employ first- and second-order algebraic models. The artificial neural network (ANN) is a more recently developed modeling approach. An evaluation of these three approaches is made in conjunction with experimentation on a newly developed centrifugal electrospinning prototype. The centrifugal electrospinning process is taken from the exploratory design phase through the pre-production phase to determine optimized manufacturing operating conditions. Centrifugal electrospinning is a sub-platform technology of electrospinning for producing nanofibrous materials with a high surface-to-volume ratio, significant fiber interconnectivity, and microscale interstitial spaces [131]. It is a potentially more cost-effective technology that evolved from traditional electrospinning. Despite a substantial amount of research in centrifugal electrospinning, many aspects of this complex process are still not well understood. This study started with researching and developing a functional centrifugal electrospinning prototype test apparatus which, through patent searches, was found to be innovative in nature. Once a functional test apparatus was designed, the process parameter settings were explored to locate an experimental setup condition where the process was able to produce acceptable sub-micron polymeric fibers. At this point, the traditional RSM/DOE approach was used to find a setting point that produced a media efficiency value close to optimal. An ANN architecture was then developed with the goal of building a model that accurately predicts response surface values. The ANN model was then used to predict responses in place of experimentation on the prototype in the RSM/DOE optimization process. Different levels of ANN use were then formulated with RSM/DOE to investigate the potential advantages in time and cost effectiveness for the overall optimization approach. The development of an innovative centrifugal electrospinning process was successful. A new electrospinning design was developed from the research, and a patent application is currently pending on the centrifugal electrospinning applicator developed from this research. Near-optimum operating settings for the prototype were found. There is typically a substantial expense associated with evolving a well-designed prototype and experimentally investigating a new process. The use of an ANN with RSM/DOE in this research reduced this expense while identifying settings close to those found when using RSM/DOE with experimentation alone. This research also provides insights into the effectiveness of the RSM/DOE approach in the context of prototype development and into how different combinations of RSM/DOE and ANN may be applied to complex processes.
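
RSM, as mentioned above, typically fits a second-order polynomial to designed-experiment data and then optimizes over that fitted surface. A minimal sketch for two factors, assuming a least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2; the data are synthetic, not from the thesis:

```python
import numpy as np

def fit_second_order(x1, x2, y):
    """Least-squares fit of a two-factor second-order response surface model."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic DOE data with a true maximum near (x1, x2) = (0.5, -0.3).
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 5 - (x1 - 0.5)**2 - (x2 + 0.3)**2 + rng.normal(0, 0.05, 30)

b = fit_second_order(x1, x2, y)
# Stationary point of the fitted quadratic: solve the gradient = 0 system.
A = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
x_star = np.linalg.solve(A, -b[1:3])
print(f"Predicted optimum near x1={x_star[0]:.2f}, x2={x_star[1]:.2f}")
```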

Item Open Access Resource allocation optimization in the smart grid and high-performance computing (Colorado State University. Libraries, 2015) Hansen, Timothy M., author; Siegel, Howard Jay, advisor; Maciejewski, Anthony A., advisor; Suryanarayanan, Siddharth, committee member; Bradley, Thomas H., committee member

This dissertation examines resource allocation optimization in the areas of Smart Grid and high-performance computing (HPC).

The primary focus of this work is resource allocation related to the Smart Grid, particularly in the areas of aggregated demand response (DR) and demand side management (DSM). Toward that goal, a framework for heuristic optimization for DR in the Smart Grid is designed. The optimization problem, denoted Smart Grid resource allocation (SGRA), controls a large set of individual customer assets (e.g., smart appliances) to enact a beneficial change on the electric power system (e.g., peak load reduction). In one part of this dissertation, the SGRA heuristic framework uses a proposed aggregator-based approach. The aggregator is a for-profit entity that uses information about customers' smart appliances to create a schedule that maximizes its profit. To motivate customers to participate with the aggregator, the aggregator offers a reduced rate of electricity called customer incentive pricing (CIP). A genetic algorithm is used to find a smart appliance schedule and CIP that maximize aggregator profit. By optimizing for aggregator profit, the peak load of the system is also reduced, resulting in a beneficial change for the entire system. Visualization techniques are adapted, and enhanced, to gain insight into the results of the aggregator-based optimization. A second approach to DR in the Smart Grid takes the form of a residential home energy management system (HEMS). The HEMS uses a non-myopic decision-making technique, a partially observable Markov decision process (POMDP), to make sequential decisions about energy usage within a residential household to minimize cost in a real-time pricing (RTP) environment. The POMDP HEMS significantly reduces the electricity cost for a residential customer with minimal impact on comfort. The secondary focus of the research is resource allocation for scientific applications in HPC using a dual-stage methodology. In the first stage, a batch scheduler assigns a number of homogeneous processors from a set of heterogeneous parallel machines to each application in a batch of parallel scientific applications. The scheduler assigns machine resources to maximize the probability that all applications complete by a given time, denoted the makespan goal; this objective function is denoted robustness. The second stage uses runtime optimization, in the form of dynamic loop scheduling, to minimize the execution time of each application using the resources allocated in the first stage. It is shown that combining the two optimization stages achieves better performance than using either approach separately or neither. The specific contributions of this dissertation are: (a) heuristic frameworks and mathematical models for resource allocation in the Smart Grid and dual-stage HPC are designed, (b) CIP is introduced to allow an aggregator profit and encourage customer participation, and (c) heuristics and decision-making techniques are designed and analyzed within the two problem domains to evaluate their performance.
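
The aggregator's schedule search above uses a genetic algorithm. A heavily simplified sketch of GA-style appliance scheduling: each individual is a vector of appliance start slots, and fitness approximates aggregator profit as revenue minus a peak-load penalty. The encoding, fitness function, and constants are illustrative stand-ins, not the dissertation's SGRA formulation:

```python
import random

SLOTS, APPLIANCES, POP, GENS = 24, 50, 40, 200
LOAD_KW = 2.0                      # each appliance draws 2 kW for one slot
random.seed(1)

def fitness(schedule):
    """Toy aggregator objective: flat revenue per served appliance,
    minus a penalty on the resulting peak load."""
    load = [0.0] * SLOTS
    for slot in schedule:
        load[slot] += LOAD_KW
    return 1.0 * len(schedule) - 0.5 * max(load)

def evolve():
    pop = [[random.randrange(SLOTS) for _ in range(APPLIANCES)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 2]            # elitist selection
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(APPLIANCES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # mutation: move one appliance
                child[random.randrange(APPLIANCES)] = random.randrange(SLOTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(f"Best fitness: {fitness(best):.1f}")
```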

Item Open Access Second-order sub-array Cartesian product split-plot design (Colorado State University. Libraries, 2015) Cortés-Mestres, Luis A., author; Duff, William S., advisor; Simpson, James R., advisor; Chong, Edwin K. P., committee member; Bradley, Thomas H., committee member; Jathar, Shantanu H., committee member

Fisher (1926) laid down the fundamental principles of design of experiments: factorization, replication, randomization, and local control of error.

In industrial experiments, however, departure from these principles is commonplace. Many industrial experiments involve situations in which complete randomization is not feasible because the factor level settings are impractical or inconvenient to change, the resources available to complete the experiment in homogeneous settings are limited, or both. Restricted randomization due to factor levels that are impractical or inconvenient to change can lead to a split-plot experiment. Restricted randomization due to resource limitations can lead to blocking. Situations that require fitting a second-order model under those conditions lead to a second-order block split-plot experiment. Although response surface methodology has experienced phenomenal growth since Box and Wilson (1951), the departure from standard methods needed to tackle second-order block split-plot design remains, for the most part, unexplored. Most graduate textbooks provide only a relatively basic treatment of the subject. Peer-reviewed literature is scarce, has a limited number of examples, and provides guidelines that often are too general. This deficit of information leaves practitioners ill-prepared to face the roadblocks illuminated by Simpson, Kowalski, and Landman (2004). Practical strategies are provided to help practitioners deal with the challenges presented by second-order block split-plot design, including an end-to-end, innovative approach to the construction of a new form of effective and efficient response surface design, referred to as the second-order sub-array Cartesian product split-plot design. This new form of design is an alternative to ineffective split-plot designs currently in use by the manufacturing and quality control community. The design is economical; the prediction variance of the regression coefficients is low and stable; and both the aliasing between terms in the model and effects not in the model and the correlation between similar effects not in the model are low. Based on an assessment using well-accepted design evaluation criteria, it is demonstrated that second-order sub-array Cartesian product split-plot designs perform as well as or better than historical designs that have been considered standards up to this point.
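
The design name suggests crossing a whole-plot sub-array with a sub-plot sub-array. A minimal sketch of that construction idea only: take the Cartesian product of a whole-plot factor array (hard-to-change factors) and a sub-plot factor array (easy-to-change factors), so every whole-plot setting hosts every sub-plot run. The two small coded arrays here are illustrative; the dissertation's actual sub-arrays are chosen for second-order estimability:

```python
from itertools import product

# Hard-to-change (whole-plot) settings and easy-to-change (sub-plot)
# settings in coded units (illustrative arrays, not the thesis design).
whole_plot = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # factors z1, z2
sub_plot = [(-1,), (0,), (1,)]                      # factor x1, 3 levels

# Cartesian product: every sub-plot run is performed within each whole plot,
# so randomization is restricted at the whole-plot level.
design = [(wp, sp) for wp, sp in product(whole_plot, sub_plot)]

for i, ((z1, z2), (x1,)) in enumerate(design, 1):
    print(f"run {i:2d}: whole plot (z1={z1:+d}, z2={z2:+d}), sub plot x1={x1:+d}")
```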

Item Open Access System engineering for radio frequency communication consolidation with parabolic antenna stacking (Colorado State University. Libraries, 2020) Sugama, Clive, author; Chandrasekar, V., advisor; Jayasumana, Anura P., committee member; Bradley, Thomas H., committee member; Chavez, Jose L., committee member

This dissertation implements System Engineering (SE) practices while utilizing Model-Based System Engineering (MBSE) methods through software applications for the design and development of a parabolic stacked antenna. Parabolic antenna stacking provides communication system consolidation by placing multiple antennas on a single pedestal, which reduces the number of U.S. Navy shipboard topside antennas. The dissertation begins by defining early-phase system lifecycle processes and correlating these early processes with activities performed when the system is being developed. Performing SE practices with the assistance of MBSE, Agile, and Lean methodologies and SE/engineering software applications reduces the likelihood of system failure, rework, schedule delays, and cost overruns.

Using this approach, antenna system consolidation via parabolic antenna stacking is investigated while applying SE principles and utilizing SE software applications. SE/engineering software such as IBM Rational Software, Innoslate, Antenna Magus, ExtendSim, and CST Microwave Studio was used to perform SE activities denoted in ISO, IEC, and IEEE standards. A method to achieve multi-band capability on a single antenna pedestal, in order to reduce the number of U.S. Navy topside antennas, is researched. An innovative approach to parabolic antenna stacking is presented to reduce the number of antennas that take up physical space on shipboard platforms. Process simulation is presented as an approach to improve the prediction of delay times for operational availability measures and to identify process improvements through Lean methodologies. Finally, this work concludes with a summary and suggestions for future work.

Item Open Access Technological advances, human performance, and the operation of nuclear facilities (Colorado State University. Libraries, 2017) Corrado, Jonathan K., author; Sega, Ronald M., advisor; Bradley, Thomas H., committee member; Chong, Edwin K. P., committee member; Young, Peter M., committee member

Many unintended adverse industrial incidents occur across the United States each year, and the nuclear industry is no exception. Depending on their severity, these incidents can be problematic for people, the facilities, and the surrounding environment. Human error is a contributing factor in many such incidents. This dissertation first explored the hypothesis that technological changes affecting how operators interact with the systems of nuclear facilities exacerbate the cost of incidents caused by human error. I conducted a review of nuclear incidents in the United States from 1955 through 2010 that reached Level 3 (serious incident) or higher on the International Nuclear Event Scale (INES). The cost of each incident at facilities that had recently undergone technological changes affecting plant operators' jobs was compared to the cost of events at facilities that had not undergone such changes. A t-test determined a statistically significant difference between the two groups, confirming the hypothesis. Next, I conducted a follow-on study to determine the impact of incorporating new technologies into nuclear facilities. The data indicated that spending more money on upgrades increased a facility's capacity as well as the number of incidents reported, but the severity of those incidents was minor. Finally, I discuss the impact of human error on plant operations and the impact of evolving technology on the 21st-century operator, proposing a methodology to overcome these challenges by applying the systems engineering process.
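
The hypothesis test described above compares incident costs between facilities with and without recent operator-facing technological changes. A minimal sketch using scipy's two-sample t-test; the cost figures are fabricated placeholders, not the study's data:

```python
from scipy import stats

# Incident costs (arbitrary units) for facilities that had recently undergone
# operator-facing technological changes vs. those that had not (made-up data).
changed = [410, 380, 520, 470, 450, 600]
unchanged = [210, 260, 190, 300, 240, 220]

# Welch's t-test avoids assuming equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(changed, unchanged, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant difference in mean incident cost.")
```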

Item Open Access The effects of a realistic hollow cathode plasma contactor model on the simulation of bare electrodynamic tether systems (Colorado State University. Libraries, 2013) Blash, Derek M., author; Williams, John D., advisor; Bradley, Thomas H., committee member; Robinson, Raymond S., committee member

The region known as Low-Earth Orbit (LEO) has become populated with artificial satellites and space debris since humanity's initial ventures into the region, turning LEO into a hazardous environment. Since LEO is very valuable to many countries, there has been a push to prevent further buildup and even talk of deorbiting the spent satellites and debris already there.

One of the more attractive concepts available for deorbiting debris and spent satellites is the Bare Electrodynamic Tether (BET). A BET is a propellantless propulsion technique in which two objects are joined by a thin conducting material. When these tethered objects are placed in LEO, the tether sweeps across the Earth's magnetic field lines and induces an electromotive force (emf) along the tether. Current from the space plasma is collected on the bare tether under the action of the induced emf, and this current interacts with the Earth's magnetic field to create a drag force that can be used to deorbit spent satellites and space debris. A Plasma Contactor (PC) is used to close the electrical circuit between the BET and the ionospheric plasma. The PC requires a voltage and, depending on the device, a gas flow to emit electrons through a plasma bridge to the ionospheric plasma. The PC can also require a plasma discharge electrode and a heater to condition it for operation. These parameters, as well as the PC performance, are required to build an accurate simulation of a PC and, therefore, of a BET deorbiting system. This thesis focuses on the development, validation, and implementation of a simulation tool to model the effects of a realistic hollow cathode PC model on a BET deorbit system.
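
The induced emf and drag force described above follow from the standard motional-emf relations: for a straight tether, emf = (v x B) . L, and the force on a current-carrying tether is F = I (L x B). A minimal sketch with representative LEO values; the orbital speed, field strength and orientation, tether length, and current are illustrative assumptions, and the simplified geometry ignores field inclination:

```python
import numpy as np

# Representative LEO values (all illustrative): ~7.7 km/s orbital speed,
# ~30 uT geomagnetic field, a 5 km conducting tether, 1 A average current.
v = np.array([7700.0, 0.0, 0.0])    # m/s, along-track velocity
B = np.array([0.0, 0.0, 30e-6])     # T, simplified field direction
L = np.array([0.0, 5000.0, 0.0])    # m, tether length vector

# Motional emf along the tether: emf = (v x B) . L (magnitude taken,
# since the sign depends on the chosen tether orientation).
emf = abs(np.dot(np.cross(v, B), L))

# Electrodynamic drag force on the current-carrying tether: F = I (L x B).
I = 1.0
F = I * np.cross(L, B)

print(f"Induced emf: {emf:.0f} V")                          # ~1155 V here
print(f"Drag force magnitude: {np.linalg.norm(F):.2f} N")   # ~0.15 N here
```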