Browsing by Author "Chong, Edwin, committee member"
Now showing 1 - 20 of 31
Item Open Access A statistical prediction model for east Pacific and Atlantic tropical cyclone genesis (Colorado State University. Libraries, 2012) Slade, Stephanie A., author; Maloney, Eric D., advisor; Thompson, David, committee member; Chong, Edwin, committee member
A statistical model is developed via multiple logistic regression for the prediction of weekly tropical cyclone activity over the East Pacific and Atlantic Ocean regions using data from 1975 to 2009. The predictors used in the model include a climatology of tropical cyclone genesis for each ocean basin, an El Niño-Southern Oscillation (ENSO) index derived from the first principal component of sea surface temperature over the Equatorial East Pacific region, and two indices representing the propagating Madden-Julian Oscillation (MJO). These predictors are suggested as useful for the prediction of East Pacific and Atlantic cyclogenesis based on previous work in the literature and are further confirmed in this study using basic statistics. Univariate logistic regression models are generated for each predictor in each region to validate the choice of prediction scheme. Using all predictors, cross-validated hindcasts are developed out to a seven week forecast lead. A formal stepwise predictor selection procedure is implemented to select the predictors used in each region at each forecast lead. Brier skill scores and reliability diagrams are used to assess the skill and dependability of the models. Results show a significant increase in model skill at predicting tropical cyclogenesis by the inclusion of the MJO out to a three week forecast lead for the East Pacific and a two week forecast lead for the Atlantic. The importance of ENSO for Atlantic genesis prediction is highlighted, and the uncertain effects of ENSO on East Pacific tropical cyclogenesis are re-visited using the prediction scheme. Future work to extend the prediction model with other predictors is discussed.
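The prediction scheme described in this abstract, a logistic regression on weekly genesis occurrence driven by a climatology, an ENSO index, and two MJO indices, verified with a Brier skill score against a climatological reference, can be sketched as follows. This is an illustrative reconstruction on synthetic data with assumed predictor definitions, not the thesis code; the stepwise predictor selection and multi-week forecast leads are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)

# Synthetic weekly predictors (stand-ins for the real indices):
# a seasonal genesis climatology, an ENSO index, and two MJO indices.
n_weeks = 1800
clim = 0.3 + 0.2 * np.sin(2 * np.pi * np.arange(n_weeks) / 52.0)
enso = rng.normal(size=n_weeks)
mjo1, mjo2 = rng.normal(size=n_weeks), rng.normal(size=n_weeks)
X = np.column_stack([clim, enso, mjo1, mjo2])

# Synthetic "genesis occurred this week" labels, loosely tied to the predictors.
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * clim + 0.4 * enso + 0.3 * mjo1)))
y = rng.binomial(1, p_true)

# Simple hindcast split (the thesis uses full cross-validation instead).
split = 2 * n_weeks // 3
model = LogisticRegression().fit(X[:split], y[:split])
p_hat = model.predict_proba(X[split:])[:, 1]

# Brier skill score relative to a climatology-only reference forecast.
bs_model = brier_score_loss(y[split:], p_hat)
bs_clim = brier_score_loss(y[split:], clim[split:])
bss = 1.0 - bs_model / bs_clim
print(f"Brier score {bs_model:.3f}, climatology {bs_clim:.3f}, BSS {bss:.3f}")
```

A positive Brier skill score indicates skill relative to the climatology-only forecast, which is the sense in which the MJO adds skill at two-to-three-week leads in the abstract above.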
Item Open Access Analytical cost metrics: days of future past (Colorado State University. Libraries, 2019) Prajapati, Nirmal, author; Rajopadhye, Sanjay, advisor; Böhm, Wim, committee member; Chong, Edwin, committee member; Pouchet, Louis-Noël, committee member
Future exascale high-performance computing (HPC) systems are expected to be increasingly heterogeneous, consisting of several multi-core CPUs and a large number of accelerators, special-purpose hardware that will increase the computing power of the system in a very energy-efficient way. Specialized, energy-efficient accelerators are also an important component in many diverse systems beyond HPC: gaming machines, general purpose workstations, tablets, phones and other media devices. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. This work builds analytical cost models for cost metrics such as time, energy, memory access, and silicon area. These models are used to predict application performance, to guide performance tuning, and to inform chip design. The idea is to work with domain-specific accelerators where analytical cost models can be accurately used for performance optimization. The performance optimization problems are formulated as mathematical optimization problems. This work explores the analytical cost modeling and mathematical optimization approach in a few ways. For stencil applications and GPU architectures, the analytical cost models are developed for execution time as well as energy. The models are used for performance tuning over existing architectures, and are coupled with silicon area models of GPU architectures to generate highly efficient architecture configurations. For matrix chain products, analytical closed-form solutions for off-chip data movement are built and used to minimize the total data movement cost of a minimum op count tree.
Item Open Access Application of semi-analytical multiphase flow models for the simulation and optimization of geological carbon sequestration (Colorado State University. Libraries, 2014) Cody, Brent M., author; Bau, Domenico, advisor; Labadie, John, committee member; Sale, Tom, committee member; Chong, Edwin, committee member
Geological carbon sequestration (GCS) has been identified as having the potential to reduce increasing atmospheric concentrations of carbon dioxide (CO2). However, a global impact will only be achieved if GCS is cost effectively and safely implemented on a massive scale. This work presents a computationally efficient methodology for identifying optimal injection strategies at candidate GCS sites having caprock permeability uncertainty. A multi-objective evolutionary algorithm is used to heuristically determine non-dominated solutions between the following two competing objectives: 1) maximize mass of CO2 sequestered and 2) minimize project cost. A semi-analytical algorithm is used to estimate CO2 leakage mass rather than a numerical model, enabling the study of GCS sites having vastly different domain characteristics. The stochastic optimization framework presented herein is applied to a case study of a brine filled aquifer in the Michigan Basin (MB). Twelve optimization test cases are performed to investigate the impact of decision maker (DM) preferences on heuristically determined Pareto-optimal objective function values and decision variable selection. Risk aversion to CO2 leakage is found to have the largest effect on optimization results, followed by degree of caprock permeability uncertainty. This analysis shows that the feasibility of GCS at the MB test site is highly dependent upon DM risk aversion. Also, large gains in computational efficiency achieved using parallel processing and archiving are discussed. Because the risk assessment and optimization tools used in this effort require large numbers of simulation calls, it is important to choose the appropriate level of complexity when selecting the type of simulation model. An additional premise of this work is that an existing multiphase semi-analytical algorithm used to estimate key system attributes (i.e. pressure distribution, CO2 plume extent, and fluid migration) may be further improved in both accuracy and computational efficiency. Herein, three modifications to this algorithm are presented and explored, including 1) solving for temporally averaged flow rates at each passive well at each time step, 2) using separate pressure response functions depending on fluid type, and 3) applying a fixed point type iterative global pressure solution to eliminate the need to solve large sets of linear equations. The first two modifications are aimed at improving accuracy while the third focuses upon computational efficiency. Results show that, while one modification may adversely impact the original algorithm, significant gains in leakage estimation accuracy and computational efficiency are obtained by implementing two of these modifications. Finally, in an effort to further enhance the GCS optimization framework, this work presents a performance comparison between a recently proposed multi-objective gravitational search algorithm (MOGSA) and the well-established fast non-dominated sorting genetic algorithm (NSGA-II). Both techniques are used to heuristically determine Pareto-optimal solutions by minimizing project cost and maximizing the mass of CO2 sequestered for nine test cases in the Michigan Basin (MB). Two performance measures are explored for each algorithm, including 1) objective solution diversity and 2) objective solution convergence rate. Faster convergence rates by the MOGSA are observed early in the majority of test optimization runs, while the NSGA-II is found to consistently provide a better search of objective function space and solutions with a lower average cost per kg sequestered.
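The notion of non-dominated (Pareto-optimal) solutions between minimizing project cost and maximizing sequestered CO2 mass can be illustrated with a simple dominance filter over candidate injection strategies. This sketch uses invented objective values and a brute-force filter; it stands in for, and is far simpler than, the MOGSA and NSGA-II searches compared in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate injection strategies, each scored on two objectives:
# project cost (minimize) and mass of CO2 sequestered (maximize).
cost = rng.uniform(1e6, 5e7, size=200)                               # dollars (illustrative)
mass = 1e9 * (cost / 5e7) ** 0.7 * rng.uniform(0.5, 1.0, size=200)   # kg (illustrative)

def is_dominated(i, cost, mass):
    """Strategy i is dominated if some other strategy costs no more,
    sequesters at least as much, and is strictly better in one objective."""
    mask = (cost <= cost[i]) & (mass >= mass[i]) & ((cost < cost[i]) | (mass > mass[i]))
    return mask.any()

pareto = [i for i in range(len(cost)) if not is_dominated(i, cost, mass)]
pareto.sort(key=lambda i: cost[i])
for i in pareto[:5]:
    print(f"cost ${cost[i]:,.0f}  mass {mass[i]:.3e} kg")
```

Plotting the retained points traces the Pareto front that a decision maker with a given risk aversion would then choose from.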
Item Open Access Automated sample preparation using adaptive digital microfluidics for lab-on-chip devices (Colorado State University. Libraries, 2018) Grant, Nicholas, author; Chen, Thomas W., advisor; Chong, Edwin, committee member; Geiss, Brian, committee member
There have been many technological advances in the medical industry over the years, giving doctors and researchers more information than ever before. Technology has enabled more sensitive and accurate sensors while also driving down their size. However, while many aspects of technology have seen improvements, the sample preparation of biological tests has seen lagging development. The sample preparation stage is defined here as the extracting of required features from a given sample for the purpose of measurement. A simple example of this is the solid phase extraction of DNA from a blood sample to detect blood borne pathogens. While this process is common in laboratories, and has even been automated by large and expensive equipment, it is a difficult process to mimic in lab-on-chip (LoC) devices. Nucleic acid isolation requires common benchtop equipment such as pipettes, vortexers, and centrifuges. Current lab based methods also use relatively large amounts of reagents to perform the extraction, adding to the cost of each test. There has been a lot of research improving sensing techniques proposed for LoC devices, but many sensing methods still require a sample preparation stage to extract desired features. Without a complementary LoC sample preparation system, the diversity of LoC devices remains limited. The results presented in this thesis demonstrate the general principle of a digital microfluidic device and the use of such a device in a small hand-held platform capable of performing many sample preparation tasks automatically, such as the extraction and isolation of DNA. Liquids are transported using a technique called Electro-wetting on Dielectric (EWOD) and controlled via a programmable microprocessor. The programmable nature of the device allows it to be configured for a variety of tests for different industries. The device also requires a fraction of the liquids that lab-based methods use, which greatly reduces the cost per test. The results of this thesis show a promising step forward to more capable LoC devices.

Item Open Access Autonomous trucks as a scalable system of systems: development, constituent systems communication protocols and cybersecurity (Colorado State University. Libraries, 2024) Elhadeedy, Ahmed, author; Daily, Jeremy, advisor; Chong, Edwin, committee member; Papadopoulos, Christos, committee member; Luo, Jie, committee member
Driverless vehicles are complex to develop due to the number of systems required for safe and secure autonomous operation. Autonomous vehicles embody the definition of a system of systems as they incorporate several systems to enable functions like perception, decision-making, vehicle controls, and external communication. Constituent systems are often developed by different vendors globally, which introduces challenges during the development process. Additionally, as the fleet of autonomous vehicles scales, optimization of onboard and off-board communication between the constituent systems becomes critical. Autonomous truck and trailer configurations face challenges when operating in reverse due to the lack of sensing on the trailer. It is anticipated that sensor packages will be installed on existing trailers to extend autonomous operations while operating in reverse in uncontrolled environments, like a customer's loading dock. Power Line Communication (PLC) between the trailer and the tractor cannot support high bandwidth and low latency communication. Legacy communications use powerline carrier communications at 9600 baud, so upfitting existing trailers for autonomous operations will require adopting technologies like Ethernet or a wireless harness between the truck and the trailer. This would require additional security measures and architecture, especially when pairing a tractor with a trailer. We propose tailoring the system of systems model for autonomous vehicles. The model serves as the governing framework for the development of constituent systems. It is essential for the SoS model to accommodate the various development approaches used for hardware and software, such as Agile or Vee models. Additionally, a queuing model for certificate authentication compares the named certificate approach with the traditional approach. The model shows the potential benefits of named certificates when the fleet of autonomous vehicles is scaled. We also propose using named J1939 signals to reduce complexities and integration efforts when multiple on-board or off-board systems request vehicle signals. We discuss the current challenges and threats on autonomous truck-trailer communication when Ethernet or a wireless harness is used, and the impact on the Electronic Control Unit (ECU) lifecycle. We also discuss using Named Data Networking (NDN) to secure in-vehicle and cloud communication. Named Data Networking can reduce the complexity of securing in-vehicle communication networks by providing a networking solution with security by design.
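The queuing comparison mentioned above is not specified in detail in the abstract, so the following is only a toy illustration of the idea: model certificate authentication as an M/M/1 queue and compare a baseline scheme against a hypothetically faster named-certificate scheme as the fleet grows. The M/M/1 assumption, the service rates, and the per-vehicle request rate are all invented for illustration and are not values from the dissertation.

```python
# Illustrative M/M/1 queueing comparison of two certificate-authentication
# schemes as the fleet (and hence the request arrival rate) scales.

def mm1_mean_time(arrival_rate, service_rate):
    """Mean time in system (waiting + service) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")  # queue is unstable: backlog grows without bound
    return 1.0 / (service_rate - arrival_rate)

mu_traditional = 100.0   # authentications/second (assumed)
mu_named = 140.0         # assumed faster lookup with named certificates

for vehicles in (100, 500, 900, 1300):
    lam = vehicles * 0.1  # each vehicle assumed to issue 0.1 requests/second
    w_trad = mm1_mean_time(lam, mu_traditional)
    w_named = mm1_mean_time(lam, mu_named)
    print(f"{vehicles:5d} vehicles: traditional {w_trad:8.4f} s, named {w_named:8.4f} s")
```

The point of such a model is that a modest difference in service rate turns into a large difference in delay, or even instability, once the arrival rate approaches capacity, which is when the benefit of the faster scheme shows up at scale.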
Item Open Access Beyond shared memory loop parallelism in the polyhedral model (Colorado State University. Libraries, 2013) Yuki, Tomofumi, author; Rajopadhye, Sanjay, advisor; Böhm, Wim, committee member; Strout, Michelle M., committee member; Chong, Edwin, committee member
With the introduction of multi-core processors, motivated by power and energy concerns, parallel processing has become main-stream. Parallel programming is much more difficult due to its non-deterministic nature, and because of parallel programming bugs that arise from non-determinacy. One solution is automatic parallelization, where it is entirely up to the compiler to efficiently parallelize sequential programs. However, automatic parallelization is very difficult, and only a handful of successful techniques are available, even after decades of research. Automatic parallelization for distributed memory architectures is even more problematic in that it requires explicit handling of data partitioning and communication. Since data must be partitioned among multiple nodes that do not share memory, the original memory allocation of sequential programs cannot be directly used. One of the main contributions of this dissertation is the development of techniques for generating distributed memory parallel code with parametric tiling. Our approach builds on important contributions to the polyhedral model, a mathematical framework for reasoning about program transformations. We show that many affine control programs can be uniformized only with simple techniques. Being able to assume uniform dependences significantly simplifies distributed memory code generation, and also enables parametric tiling. Our approach is implemented in the AlphaZ system, a system for prototyping analyses, transformations, and code generators in the polyhedral model. The key features of AlphaZ are memory re-allocation, and explicit representation of reductions. We evaluate our approach on a collection of polyhedral kernels from the PolyBench suite, and show that our approach scales as well as PLuTo, a state-of-the-art shared memory automatic parallelizer using the polyhedral model. Automatic parallelization is only one approach to dealing with the non-deterministic nature of parallel programming that leaves the difficulty entirely to the compiler. Another approach is to develop novel parallel programming languages. These languages, such as X10, aim to provide a highly productive parallel programming environment by including parallelism into the language design. However, even in these languages, parallel bugs remain an important issue that hinders programmer productivity. Another contribution of this dissertation is to extend the array dataflow analysis to handle a subset of X10 programs. We apply the result of dataflow analysis to statically guarantee determinism. Providing static guarantees can significantly increase programmer productivity by catching questionable implementations at compile-time, or even while programming.
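Tiling is the central loop transformation in this line of work, and parametric tiling means the tile sizes stay symbolic in the generated code so they can be tuned at run time. The sketch below shows the transformation itself on a dense matrix product, written in Python for readability rather than as output of AlphaZ or any other polyhedral tool.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Triply nested matrix product with rectangular loop tiling.
    The tile size is a runtime parameter, which is the point of
    parametric tiling: generate the code once, tune the tile size later."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for ii in range(0, n, tile):
        for jj in range(0, m, tile):
            for kk in range(0, k, tile):
                # Work on one tile at a time; in generated C code these would
                # be the innermost point loops over a cache-resident block.
                C[ii:ii + tile, jj:jj + tile] += (
                    A[ii:ii + tile, kk:kk + tile] @ B[kk:kk + tile, jj:jj + tile]
                )
    return C

A = np.random.rand(100, 80)
B = np.random.rand(80, 120)
assert np.allclose(matmul_tiled(A, B, tile=32), A @ B)  # same result as untiled
```

In the distributed-memory setting of the dissertation, tiles additionally become the units of data partitioning and inter-node communication.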
Item Open Access Causal inference using observational data - case studies in climate science (Colorado State University. Libraries, 2020) Samarasinghe, Savini M., author; Ebert-Uphoff, Imme, advisor; Anderson, Chuck, committee member; Chong, Edwin, committee member; Kirby, Michael, committee member
We are in an era where atmospheric science is data-rich in both observations (e.g., satellite/sensor data) and model output. Our goal with causal discovery is to apply suitable data science approaches to climate data to make inferences about the cause-effect relationships between climate variables. In this research, we focus on using observational studies, an approach that does not rely on controlled experiments, to infer cause-effect. Due to reasons such as latent variables, these observational studies do not allow us to prove causal relationships. Nevertheless, they provide data-driven hypotheses of the interactions, which can enable us to get insights into the salient interactions as well as the timescales at which they occur. Even though there are many different causal inference frameworks and methods that rely on observational studies, these approaches have not found widespread use within the climate or Earth science communities. To date, the most commonly used observational approaches include lagged correlation/regression analysis, as well as the bivariate Granger causality approach. We can attribute this lack of popularity to two main reasons. First is the inherent difficulty of inferring cause-effect in climate. Complex processes in the climate interact with each other at varying time spans. These interactions can be nonlinear, the distributions of relevant climate variables can be non-Gaussian, and the processes can be chaotic. A researcher interested in these causal inference problems has to face many challenges varying from identifying suitable variables, data, preprocessing and inference methods, as well as setting up the inference problem in a physically meaningful way. Second, the limited exposure and accessibility to modern causal inference approaches has also restricted their use within the climate science community. In this dissertation, we present three case studies related to causal inference in climate science, namely, (1) causal relationships between the Arctic temperature and mid-latitude circulations, (2) relationships between the Madden Julian Oscillation (MJO) and the North Atlantic Oscillation (NAO) and (3) the causal relationships between atmospheric disturbances of different spatial scales (e.g., Planetary vs. Synoptic). We use methods based on probabilistic graphical models to infer cause-effect, specifically constraint-based structure learning methods, and graphical Granger methods. For each case study, we analyze and document the scientific thought process of setting up the problem, the challenges faced, and how we have dealt with the challenges. The challenges discussed include, but are not limited to, method selection, variable representation, and data preparation. We also present a successful high-dimensional study of causal discovery in spectral space. The main objectives of this research are to make causal inference methods more accessible to a researcher/climate scientist who is at entry-level to spatiotemporal causality and to promote more modern causal inference methods to the climate science community. The case studies, covering a wide range of questions and challenges, are meant to act as a resourceful starting point to a researcher interested in tackling more general causal inference problems in climate.
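The bivariate Granger causality approach named above as a common baseline can be written down in a few lines: regress a target series on its own past with and without the candidate driver's past, and test whether the extra lags reduce the residual variance. The sketch below uses synthetic series and a single lag; the dissertation's case studies rely on the more general graphical Granger and constraint-based structure learning methods, which this does not reproduce.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic example: x drives y with a one-step lag, y does not drive x.
n = 500
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def granger_f_test(target, driver, lag=1):
    """Bivariate Granger test: does adding the lagged driver reduce the
    residual variance of an autoregressive model of the target?"""
    Y = target[lag:]
    ones = np.ones_like(Y)
    Xr = np.column_stack([ones, target[:-lag]])                 # restricted: own past only
    Xu = np.column_stack([ones, target[:-lag], driver[:-lag]])  # unrestricted: + driver's past
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    df1, df2 = 1, len(Y) - Xu.shape[1]
    F = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return F, 1.0 - stats.f.cdf(F, df1, df2)   # F statistic and p-value

print("x -> y:", granger_f_test(y, x))   # should be significant
print("y -> x:", granger_f_test(x, y))   # should not be
```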
Item Open Access Characterizing the self-motion manifolds of redundant robots of arbitrary kinematic structures (Colorado State University. Libraries, 2022) Almarkhi, Ahmad A., author; Maciejewski, Anthony A., advisor; Chong, Edwin, committee member; Oprea, Iuliana, committee member; Zhao, Jianguo, committee member
Robot fault tolerance measures can be classified into two categories: 1) local measures that are based on the singular value decomposition (SVD) of the robot Jacobian, and 2) global measures that are suitable to quantify the fault tolerance more effectively in pick-and-place applications. One can use the size of the self-motion manifold of a robot as a global fault-tolerance measure. The size of the self-motion manifold at a certain end-effector location can be simply the sum of the range of the joint angles of a robot at that location. This work employs the fact that the largest self-motion manifolds occur due to merging two (or more) previously disjoint manifolds. The connection of previously disjoint manifolds occurs at special configurations in the joint space called singularities. Singularities (singular configurations) occur when two or more of the robot joint axes become aligned and are linearly dependent. A significant amount of research has been performed on identifying robot singularities, but it was all based on symbolically solving for when the robot Jacobian is not of full rank. In this work, an algorithm is proposed that is based on the gradient of the singular values of the robot Jacobian. This algorithm is not limited to any degree of freedom (DoF), specific robot kinematic structure, or rank of singularity. Based on the robot singularities, one can search for the largest self-motion manifold near robot singularities. The measure of the size of the self-motion manifold was chosen to eliminate the effect of the self-motion manifold's topology and dimension. Because the SVD at singularities is indistinct, one can employ Givens rotations to define the physically meaningful singular directions, i.e., the directions in which the robot is not able to move. This approach has been extensively implemented on a 4-DoF robot, different 7-DoF robots, and an 8-DoF robot. The global fault-tolerance measure might be further optimized by changing the kinematic structure of a robot. This may allow one to determine a globally fault-tolerant robot, i.e., a robot with a 2π range for all of its joint angles at a certain end-effector location, i.e., a location that is the most suitable for pick-and-place tasks.
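The link between singularities and the singular value decomposition that this abstract relies on is easy to see on a toy manipulator: as joint axes align, the smallest singular value of the Jacobian drops to zero, and the associated singular vector gives the direction in which the end effector cannot move. The two-link planar arm below is only an illustration, not one of the 4-, 7-, or 8-DoF robots studied, and it uses a plain SVD rather than the Givens-rotation treatment the work applies at repeated singular values.

```python
import numpy as np

def jacobian_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Planar two-link arm Jacobian mapping joint rates to end-effector velocity."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Approach the elbow-straight singularity: the smallest singular value -> 0.
for theta2 in (1.0, 0.1, 0.0):
    J = jacobian_2link(0.3, theta2)
    s = np.linalg.svd(J, compute_uv=False)
    print(f"theta2={theta2:4.1f}  singular values = {np.round(s, 4)}")

# At theta2 = 0 the Jacobian is rank deficient; the left singular vector for the
# zero singular value is the Cartesian direction the end effector cannot move in.
U, s, Vt = np.linalg.svd(jacobian_2link(0.3, 0.0))
print("lost motion direction:", np.round(U[:, -1], 3))
```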
Item Open Access Comparing sets of data sets on the Grassmann and flag manifolds with applications to data analysis in high and low dimensions (Colorado State University. Libraries, 2020) Ma, Xiaofeng, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Chong, Edwin, committee member; Scharf, Louis, committee member; Shonkwiler, Clayton, committee member
This dissertation develops numerical algorithms for comparing sets of data sets utilizing shape and orientation of data clouds. Two key components for "comparing" are the distance measure between data sets and correspondingly the geodesic path in between. Both components will play a core role which connects two parts of this dissertation, namely data analysis on the Grassmann manifold and flag manifold. For the first part, we build on the well known geometric framework for analyzing and optimizing over data on the Grassmann manifold. To be specific, we extend the classical self-organizing mappings to the Grassmann manifold to visualize sets of high dimensional data sets in 2D space. We also propose an optimization problem on the Grassmannian to recover missing data. In the second part, we extend the geometric framework to the flag manifold to encode the variability of nested subspaces. There we propose a numerical algorithm for computing a geodesic path and distance between nested subspaces. We also prove theorems to show how to reduce the dimension of the algorithm for practical computations. The approach is shown to have advantages for analyzing data when the number of data points is larger than the number of features.
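On the Grassmann manifold, the geodesic distance between two subspaces is computed from their principal angles, which come from an SVD of the product of orthonormal bases. The sketch below shows that standard computation on synthetic data clouds; the dissertation's contribution of geodesics and distances between nested subspaces on the flag manifold is not reproduced here.

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between span(A) and span(B) on the Grassmann
    manifold, computed from the principal angles between the subspaces."""
    Qa, _ = np.linalg.qr(A)                           # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)                           # orthonormal basis for span(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(sigma, -1.0, 1.0))      # principal angles
    return np.linalg.norm(theta)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # a 3-dimensional data subspace in R^50
Y = X + 0.1 * rng.normal(size=(50, 3))       # a nearby data cloud
Z = rng.normal(size=(50, 3))                 # an unrelated one
print("d(X, Y) =", round(grassmann_distance(X, Y), 3))
print("d(X, Z) =", round(grassmann_distance(X, Z), 3))
```

The nearby data cloud yields a much smaller distance than the unrelated one, which is the kind of comparison between "sets of data sets" the title refers to.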
Item Open Access Control system design for plasma power generator (Colorado State University. Libraries, 2022) Sankaran, Aishwarya, author; Young, Peter M., advisor; Chong, Edwin, committee member; Anderson, Charles, committee member
The purpose of this research is to develop advanced control strategies for precise control over power delivery to nonlinear plasma loads at high frequency. A high-fidelity MATLAB/Simulink simulation model was provided by Advanced Energy Industries, Inc (AE) and the data from this model was considered as the actual model under consideration. The research work requires computing a mathematical model of the plasma power generator system, analyzing and synthesizing robust controllers for individual operating points, and then developing a control system that covers the entire grid of operating points. The modeling process involves developing computationally simple near-linear models representing relevant frequencies and operating points for the system consisting of nonlinear plasma load, RF Power Amplifier, and a Match Network. To characterize the (steady-state) mapping from power setpoint to delivered power, the steady-state gains of the system are taken into consideration. Linear and nonlinear system identification procedures are used to adequately capture both the nonlinear steady-state gains and the linear dynamic model response. These near-linear or linear models, with uncertainty descriptions to characterize the robustness requirements, are utilized in the second stage to develop a grid of robust controllers designed at linear operating points. The controller from the μ-synthesis design process optimizes robust performance for allowable perturbations as large as possible. It does all this while guaranteeing closed-loop stability for all allowable perturbations. The final stage of the research focuses on developing Linear Parameter Varying (LPV) controllers with non-linear offset. This single controller covers the entire operating range, including the case that the desired signals to track may vary over wide regions of the operating envelope. The LPV controller allows actual power to track the changing setpoint in a smooth manner over the entire operating range.

Item Open Access Davidson and the idiolectic view (Colorado State University. Libraries, 2013) Gumm, Derek, author; Losonsky, Michael, advisor; Kneller, Jane, committee member; Chong, Edwin, committee member
In this thesis, I defend and expand Donald Davidson's view of language and linguistic meaning. I begin by looking at two positions that appreciate the sociality of language and linguistic meaning in two different ways. One view, as exemplified by Michael Dummett, sees the meaning of words as a feature of a language that holds independently of any particular speaker, while the other view, as exemplified by Davidson, sees meaning as depending on particular speakers and interpreters, their intentions, and their interactions. I find a serious tension in the former view and side with the latter, which I dub the idiolectic view of language. In the second chapter, I analyze Davidson's claim that understanding gives life to meaning. Using this analysis as a jumping off point, I outline the primary features of the Davidsonian idiolectic program. Finally, I conclude that the idiolectic features of this position place a special emphasis on the moment at which two people's personal understanding of language overlap and that such an emphasis is best understood in terms of events as particulars. In the third and final chapter, I argue that an ontology that countenances events as particulars is required for the idiolectic view of interpretation to get off the ground. First, I outline some of Davidson's classic arguments in favor of an ontology of events for action sentences and expand them to the case of what I call second-order language sentences, sentences about communication. Next, I discuss the importance of a criterion of event identity and individuation, working from some of Davidson's own arguments. I then extend Davidson's analysis of action sentences to second-order language sentences in order to determine the essential features of the linguistic event-type. Finally, I conclude that some basic notion of a language is required by this idiolectic view despite what Davidson originally thought. However, it is not the notion of a shared language that Dummett originally had in mind.

Item Open Access Enabling predictive energy management in vehicles (Colorado State University. Libraries, 2018) Asher, Zachary D., author; Bradley, Thomas H., advisor; Chong, Edwin, committee member; Young, Peter, committee member; Zhao, Jianguo, committee member
Widespread automobile usage provides economic and societal benefits, but combustion engine powered automobiles have significant economic, environmental, and human health costs. Recent research has shown that these costs can be reduced by increasing fuel economy through optimal energy management. A globally optimal energy management strategy requires perfect prediction of an entire drive cycle but can improve fuel economy by up to 30%. This dissertation focuses on bridging the gap between this important research finding and implementation of predictive energy management in modern vehicles. A primary research focus is to investigate the tradeoffs between information sensing, computation power requirements for prediction, and prediction effort when implementing predictive energy management in vehicles. These tradeoffs are specifically addressed by first exploring the resulting fuel economy from different types of prediction errors, then investigating the level of prediction fidelity, scope, and real-time computation that is required to realize a fuel economy improvement, and lastly investigating a large computational effort scenario using only modern technology to make predictions. All of these studies are implemented in simulation using high fidelity and physically validated vehicle models. Results show that fuel economy improvements using predictive optimal energy management are feasible despite prediction errors, in a low computational cost scenario, and with only modern technology to make predictions. It is anticipated that these research findings can inform new control strategies to improve vehicle fuel economy and alleviate the economic, environmental, and human health costs for the modern vehicle fleet.
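The claim that a globally optimal energy management strategy needs the whole drive cycle in advance is commonly made concrete with dynamic programming over the known cycle; dynamic programming is not named in the abstract, so the following is only an assumed stand-in for that idea. The toy backward DP below splits a known power demand between a battery and an engine; the demand profile, component model, and charge-sustaining constraint are all invented and far simpler than the validated vehicle models used in the dissertation.

```python
import numpy as np

# Toy backward dynamic program: split a known power demand profile between a
# battery and an engine so that total fuel use is minimized.
demand = np.array([10.0, 25.0, 40.0, 15.0, 30.0, 5.0])   # kW demanded at each step
dt = 1.0                                                  # hours per step (toy)
capacity = 20.0                                           # battery capacity, kWh
soc_grid = np.linspace(0.0, 1.0, 41)                      # quantized state of charge
batt_options = np.linspace(-10.0, 10.0, 21)               # battery power choices, kW

def fuel_rate(engine_kw):
    # Toy fuel model: linear in power plus an idle penalty when the engine is on.
    return np.where(engine_kw > 0, 0.08 * engine_kw + 0.3, 0.0)  # kg/h

T, INF = len(demand), 1e18
cost_to_go = np.where(soc_grid >= 0.5, 0.0, INF)  # charge-sustaining: end with SOC >= 0.5

for t in reversed(range(T)):
    new_cost = np.full_like(cost_to_go, INF)
    for i, soc in enumerate(soc_grid):
        for b in batt_options:
            engine = demand[t] - b
            if engine < 0:                       # engine cannot absorb power
                continue
            soc_next = soc - b * dt / capacity
            if not 0.0 <= soc_next <= 1.0:
                continue
            j = int(round(soc_next * (len(soc_grid) - 1)))
            c = fuel_rate(engine) * dt + cost_to_go[j]
            if c < new_cost[i]:
                new_cost[i] = c
    cost_to_go = new_cost

start = int(round(0.5 * (len(soc_grid) - 1)))
print(f"optimal fuel use starting from 50% SOC: {cost_to_go[start]:.2f} kg")
```

Because the recursion sweeps backward from the end of the cycle, it needs the entire demand profile up front, which is exactly why prediction quality matters for real-time implementations.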
Item Open Access Enhancing the test and evaluation process: implementing agile development, test automation, and model-based systems engineering concepts (Colorado State University. Libraries, 2020) Walker, Joshua T., author; Borky, John, advisor; Bradley, Thomas, advisor; Chong, Edwin, committee member; Ghosh, Sudipto, committee member; Jayasumana, Anura, committee member
With the growing complexity of modern systems, traditional testing methods are falling short. Test documentation suites used to verify the software for these types of large, complex systems can become bloated and unclear, leading to extremely long execution times and confusing, unmanageable test procedures. Additionally, the complexity of these systems can prevent the rapid understanding of complicated system concepts and behaviors, which is a necessary part of keeping up with the demands of modern testing efforts. Opportunities for optimization and innovation exist within the Test and Evaluation (T&E) domain, evidenced by the emergence of automated testing frameworks and iterative testing methodologies. Further opportunities lie with the directed expansion and application of related concepts such as Model-Based Systems Engineering (MBSE). This dissertation documents the development and implementation of three methods of enhancing the T&E field when applied to a real-world project. First, the development methodology of the system was transitioned from Waterfall to Agile, providing a more responsive approach when creating new features. Second, the Test Automation Framework (TAF) was developed, enabling the automatic execution of test procedures. Third, a method of test documentation using the Systems Modeling Language (SysML) was created, adopting concepts from MBSE to standardize the planning and analysis of test procedures. This dissertation provides the results of applying the three concepts to the development process of an airborne Electronic Warfare Management System (EWMS), which interfaces with onboard and offboard aircraft systems to receive and process the threat environment, providing the pilot or crew with a response solution for the protection of the aircraft. This system is representative of a traditional, long-term aerospace project that has been constantly upgraded over its lifetime. Over a two-year period, this new process produced a number of qualitative and quantitative results, including improving the quality and organization of the test documentation suite, reducing the minimum time to execute the test procedures, enabling the earlier identification of defects, and increasing the overall quality of the system under test. The application of these concepts generated many lessons learned, which are also provided. Transitioning a project's development methodology, modernizing the test approach, and introducing a new system of test documentation may provide significant benefits to the development of a system, but these types of process changes must be weighed against the needs of the project. This dissertation provides details of the effort to improve the effectiveness of the T&E process on an example project, as a framework for possible implementation on similar systems.
Item Open Access From Pyrrhonism to Madhyamaka: paradoxical solutions to skeptical problems (Colorado State University. Libraries, 2018) Williams, Stephen G., author; Archie, Andre, advisor; MacKenzie, Matthew, committee member; Chong, Edwin, committee member
Skepticism as a philosophical school of thought is best embodied by Greek Pyrrhonism and Indian Madhyamaka. Between these two schools, however, Pyrrhonism is bogged down on issues that Madhyamaka is not. For Greek Pyrrhonism, scholarship revolves around the issue that skeptics cannot have beliefs, and yet this is something they believe. For Indian Madhyamaka, scholarship points towards a skeptical position that is consistently paradoxical. This paper will first explore the discussion on Sextus Empiricus' Pyrrhonism as established by Michael Frede, Myles Burnyeat, and Jonathan Barnes. From there, a closer look at Aristotle, Anselm, and Immanuel Kant will show that paradoxes are more common in philosophy than normally acknowledged. An in-depth discussion of Nāgārjuna and Śāntideva's Madhyamaka skepticism using interpretations from Jay Garfield and Graham Priest will illustrate how paradoxes at the limits of thought can correctly capture skepticism. Using the understanding of Madhyamaka, the debate on Pyrrhonism and beliefs will be shown to be correctly paradoxical. Finally, the paper will conclude that skepticism itself is not only paradoxical, but also an impressive and valuable philosophical position.

Item Open Access Fuel tank inerting systems for civil aircraft (Colorado State University. Libraries, 2014) Smith, David E., author; Sega, Ron, advisor; Young, Peter, committee member; Chong, Edwin, committee member; France, Robert, committee member
This thesis examines and compares a variety of methods for inerting the fuel tanks of civil transport aircraft. These aircraft can range from the 50-seat Bombardier CRJ-200 to the 525-850 seat Superjumbo Airbus A380 and can also include airliner-based VIP aircraft such as the Boeing Business Jet (BBJ) or executive-class aircraft such as the Learjet 85. Three system approaches to fuel tank inerting are presented in this paper with the intent of providing senior systems engineers and project managers a comparative requirements analysis and a thorough analysis of the different levels of documentation effort required for each, rather than performing a simple technical trade-off study to determine which system architecture is the lowest weight or has the lowest parts count. When choosing a system architecture, requirements analysis is often overlooked and documentation workload is brushed aside in favor of purely technical analyses. This thesis aims to provide examples of why the non-technical analyses are also important in good systems engineering.
Item Open Access Highly scalable algorithms for scheduling tasks and provisioning machines on heterogeneous computing systems (Colorado State University. Libraries, 2015) Tarplee, Kyle M., author; Maciejewski, Anthony A., advisor; Siegel, Howard Jay, committee member; Chong, Edwin, committee member; Bates, Dan, committee member
As high performance computing systems increase in size, new and more efficient algorithms are needed to schedule work on the machines, understand the performance trade-offs inherent in the system, and determine which machines to provision. The extreme scale of these newer systems requires unique task scheduling algorithms that are capable of handling millions of tasks and thousands of machines. A highly scalable scheduling algorithm is developed that computes high quality schedules, especially for large problem sizes. Large-scale computing systems also consume vast amounts of electricity, leading to high operating costs. Through the use of novel resource allocation techniques, system administrators can examine this trade-off space to quantify how much a given performance level will cost in electricity, or see what kind of performance can be expected when given an energy budget. Trading-off energy and makespan is often difficult for companies because it is unclear how each affects the profit. A monetary-based model of high performance computing is presented and a highly scalable algorithm is developed to quickly find the schedule that maximizes the profit per unit time. As more high performance computing needs are being met with cloud computing, algorithms are needed to determine the types of machines that are best suited to a particular workload. An algorithm is designed to find the best set of computing resources to allocate to the workload that takes into account the uncertainty in the task arrival rates, task execution times, and power consumption. Reward rate, cost, failure rate, and power consumption can be optimized, as desired, to optimally trade off these conflicting objectives.
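The "profit per unit time" objective can be illustrated on a tiny instance: for each candidate assignment of tasks to machines, compute revenue minus energy cost and divide by the makespan. Brute force over a toy problem, as below, stands in for the highly scalable algorithm developed in the dissertation; all numbers are invented.

```python
import itertools
import numpy as np

# Assign 6 tasks to 2 heterogeneous machines; machine 0 is fast but power-hungry.
exec_time = np.array([[4.0, 7.0],    # rows: tasks, cols: machines (seconds)
                      [3.0, 5.0],
                      [6.0, 9.0],
                      [2.0, 4.0],
                      [5.0, 8.0],
                      [3.0, 6.0]])
power = np.array([200.0, 80.0])      # watts drawn by each machine while busy
reward = 10.0                        # revenue per completed task (toy value)
energy_price = 0.004                 # cost per watt-second (toy value)

def profit_rate(assignment):
    """Profit per unit time for one assignment of tasks to machines."""
    busy = np.zeros(2)
    energy = 0.0
    for task, m in enumerate(assignment):
        busy[m] += exec_time[task, m]
        energy += power[m] * exec_time[task, m]
    makespan = busy.max()
    profit = reward * len(assignment) - energy_price * energy
    return profit / makespan, makespan

best = max(itertools.product((0, 1), repeat=6), key=lambda a: profit_rate(a)[0])
rate, makespan = profit_rate(best)
print("best assignment:", best, f" profit/time = {rate:.3f}, makespan = {makespan:.1f} s")
```

Raising the energy price shifts the best assignment toward the efficient machine even though the makespan grows, which is the energy versus makespan trade-off the abstract describes.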
Item Open Access Integrated optimization of composite structures (Colorado State University. Libraries, 2022) Lang, Daniel, author; Radford, Donald, advisor; Herber, Daniel, committee member; Chong, Edwin, committee member; Heyliger, Paul, committee member
Many industries are exploring the application of composite materials to structural designs to reduce weight. A common issue that is encountered by these industries, however, is difficulty in developing structural geometries best suited for the materials. Research efforts have begun to develop optimization methodology to help develop structural shapes but have thus far only partially addressed optimization of the geometry. This dissertation provides a literature review of past efforts to develop optimization methodologies. Through that review it is identified that the subprocesses required to fully optimize a composite structure are mold shape optimization, ply draping analysis, kinematic partitioning, connection and joint definition, ply topology optimization and manufacturing simulations. To date, however, these subprocesses have primarily been applied individually and have not been integrated to develop fully optimized designs. In this research, a methodology is proposed to integrate established composite design and subprocesses to develop optimized composite structures. The proposed methodology sequentially and iteratively improves the design through mold shape optimization, ply draping analysis, kinematic partitioning, connection and joint definition, ply topology optimization and manufacturing simulations. Throughout the proposed methodology, checks are also integrated to ensure that the developed design meets design objectives and constraints. To test the methodology a case study is conducted to develop composite rail vehicle structures. As part of this case study, it is hypothesized that a composite structure designed through a fully integrated methodology will demonstrate reduced costs, mass and improved manufacturability compared to a structure where functions have only been partially integrated. When the proposed fully integrated methodology is applied to create a case study design, the hypothesis is validated. The design generated by the fully integrated optimization methodology has a 37% lower mass and a 56% lower cost to manufacture than a design that is developed through a partially integrated methodology. The case study also demonstrates that structures developed through the proposed methodology have improved manufacturability.
Item Open Access Modernizing automation in industrial control/cyber physical systems through the system engineering lifecycle (Colorado State University. Libraries, 2021) Ault, Trevor J., author; Bradley, Thomas, advisor; Golicic, Susan, committee member; Windom, Bret, committee member; Chong, Edwin, committee member
The systems engineering process seeks to develop systems beginning from a need and ending with an operational system. The systems engineering framework is acknowledged as an effective tool for building complex systems, but this research seeks an expansion in scope and emphasis to include more detailed methods for managing, operating, and upgrading existing subsystems when they are challenged by obsolescence, functional degradation, and upgrades/commissioning. System development from a blank slate is often the default for the systems engineering field, but often an individual subsystem (in the case studied here, the automation system) must undergo upgrades much sooner than the rest of the system because it can no longer meet its functional requirements due to obsolescence. Partial system upgrades can be difficult to conceive and execute for a complex industrial system, but the fundamentals of the system engineering process can be adapted to meet the requirements for maintenance of an industrial control/cyber physical system in practice. Cyber physical systems are defined as systems that are enabled by interactions between computers and physical systems. Computers and other automation components that control the physical processes are considered part of this system. This dissertation seeks to engineer industrial automation systems to enable identification of obsolescence in cyber physical systems, simulation testing of the automation subsystems before/during upgrade, and integrity testing of alarms and automation after completion. By integrating some key aspects of the systems engineering approach into operations and maintenance activities for large-scale industrial cyber physical systems, this research develops and applies 1) novel risk-based approaches for managing obsolescence, 2) novel techniques for simulation of automation controls for fast commissioning in the field, and 3) an automatic alarm configuration engineering and management tool. These systems engineering developments are applied over the course of 5 years of continuous operation and 14 large upgrades to automation systems in the process industry (gas processing, chemical, power generation). The results of this application illustrate consistent improvement in the management, upgrading, and engineering of industrial automation systems. Metrics of system performance used to quantify the value of the proposed methodological innovations include commonly used measures such as number of alarms, cost, and schedule improvement. For the research contribution which develops novel obsolescence identification and replacement strategies, the results show that a modified risk management approach for automation and cyber physical systems can quickly identify components that require upgrade. The results indicate a reduction of roughly 70% of reactive replacements due to obsolescence after the major upgrade and a 24% reduction in unplanned downtime due to part failure during normal operations. For the research contribution illustrating that automation system simulation can confirm that the upgraded subsystems meet functional requirements during upgrade on continuously running sites, results are similarly positive. A new metric is developed to normalize the cost of simulation per system, which measures the number of simulation inputs (I/O) divided by cost. Results show that using the proposed simulation tools can reduce the cost of simulation by 40% on a normalized basis and reduce alarms for a system by 55% during system startup and early operations. Lastly, an audit system was developed for the automation systems to ensure that the subsystem continued to meet functional requirements after the upgrade. Deploying the audit system for alarm configuration was successful in that it resulted in no unauthorized alarm changes after the subsystem upgrade. It also resulted in improved alarm performance at sites since causes of alarm deterioration were eliminated. Results show that these added controls resulted in 52% fewer alarms (post implementation) and the elimination of alarm flooding (periods where more than 10 alarms occur in under 10 minutes). The goal of this dissertation is to document innovative means to develop systems engineering towards operational and maintenance upgrades for industrial automation systems and to provide examples of ways this process can be applied. The value of the proposed engineering methods was validated through their application to over a dozen industrial sites of varying processes and complexity. While this research focused on heavy process industries, the process for identifying obsolete components and making major subsystem upgrades can also be applied to a broad set of industries and systems and provides research contributions to the fields of both industrial automation and systems engineering.
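The alarm-flood definition quoted above (more than 10 alarms within 10 minutes) is straightforward to audit automatically with a sliding window over alarm timestamps. The sketch below is a generic illustration of such a check on an invented alarm log, not the alarm configuration engineering and management tool described in the dissertation.

```python
from datetime import datetime, timedelta

def alarm_floods(timestamps, limit=10, window=timedelta(minutes=10)):
    """Return start times of periods where more than `limit` alarms
    occur within `window` (the alarm-flood definition quoted above)."""
    times = sorted(timestamps)
    floods = []
    lo = 0
    for hi in range(len(times)):
        while times[hi] - times[lo] > window:   # shrink window from the left
            lo += 1
        if hi - lo + 1 > limit:                 # too many alarms in the window
            floods.append(times[lo])
    return sorted(set(floods))

# Illustrative alarm log: a quiet background plus one 12-alarm burst.
t0 = datetime(2021, 3, 1, 8, 0)
log = [t0 + timedelta(minutes=5 * i) for i in range(10)]                 # background alarms
log += [t0 + timedelta(minutes=20, seconds=30 * i) for i in range(12)]   # flood burst
print("flood windows starting at:", alarm_floods(log))
```

Running such a check continuously is one simple way an audit system can confirm that alarm performance does not deteriorate after a subsystem upgrade.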
Item Open Access Moral error theory (Colorado State University. Libraries, 2016) Gustafson, Matt, author; Tropman, Elizabeth, advisor; Losonsky, Michael, committee member; Chong, Edwin, committee member
J.L. Mackie historically has been considered the primary defender of moral error theory. The position he defends is one of many metaethical positions an individual might hold. Moral error theory's central thesis is that all moral claims are false or neither true nor false because of moral discourse's commitment to some problematic thesis. Moral error theory has not always been taken seriously however. Many have responded to Mackie's moral error theory, but they often do so in a cursory manner. Moral error theory would seem to be a historical curiosity, but not a position often adopted. In modern presentations and critiques of moral error theory the discussion often seems to be one-sided. The error theorist does not always consider the weaknesses of what he considers the best presentation of his position, and the critic does not always fully appreciate the appeal of, or fully engage with the strongest presentations of moral error theory. Often error theorists and critics of moral error theory recognize that moral error theory could be developed in a variety of manners, but limit their discussions to moral error theories which closely relate to Mackie's original presentation of moral error theory. By developing an understanding of Mackie's original position and new variations on his position we can see what motivates individuals to develop error theories related in some manner to Mackie's error theory. We can also see the limits of moral error theories which build off Mackie's error theory however. In particular, I will examine the moral error theory of Jonas Olson. Olson identifies moral discourse's commitment to irreducible normativity as especially problematic. Identifying the limits and difficulties which plague error theories such as Olson's should lead us to consider other manners in which one can develop moral error theories. In the end, I propose that one might be able to establish something like a moral error theory by arguing that moral beliefs are unjustified. Moral beliefs, it will be argued, are unjustified because they ultimately issue from an evolutionary source which is unreliable. Because those beliefs are unjustified, I claim that we are in error if we continue to hold those beliefs. While such a position has often been called moral skepticism, I argue that it can be seen as a sort of moral error theory.

Item Open Access Neural networks for modeling and control of particle accelerators (Colorado State University. Libraries, 2020) Edelen, Auralee Linscott, author; Biedron, Sandra, advisor; Milton, Stephen, advisor; Chong, Edwin, committee member; Johnson, Thomas, committee member
Charged particle accelerators support a wide variety of scientific, industrial, and medical applications. They range in scale and complexity from systems with just a few components for beam acceleration and manipulation, to large scientific user facilities that span many kilometers and have hundreds-to-thousands of individually-controllable components. Specific operational requirements must be met by adjusting the many controllable variables of the accelerator. Meeting these requirements can be challenging, both in terms of the ability to achieve specific beam quality metrics in a reliable fashion and in terms of the time needed to set up and maintain the optimal operating conditions. One avenue toward addressing this challenge is to incorporate techniques from the fields of machine learning (ML) and artificial intelligence (AI) into the way particle accelerators are modeled and controlled. While many promising approaches within AI/ML could be used for particle accelerators, this dissertation focuses on approaches based on neural networks. Neural networks are particularly well-suited to modeling, control, and diagnostic analysis of nonlinear systems, as well as systems with large parameter spaces. They are also very appealing for their ability to process high-dimensional data types, such as images and time series (both of which are ubiquitous in particle accelerators). In this work, key studies that demonstrated the potential utility of modern neural network-based approaches to modeling and control of particle accelerators are presented. The context for this work is important: at the start of this work in 2012, there was little interest in AI/ML in the particle accelerator community, and many of the advances in neural networks and deep learning that enabled its present success had not yet been made at that time. As such, this work was both an exploration of possible application areas and a generator of proof-of-concept demonstrations in these areas.