Browsing by Author "Chong, Edwin K. P., committee member"
Now showing 1 - 20 of 26
Item Open Access A graph-based, systems approach for detecting violent extremist radicalization trajectories and other latent behaviors (Colorado State University. Libraries, 2017) Hung, Benjamin W. K., author; Jayasumana, Anura P., advisor; Chong, Edwin K. P., committee member; Ray, Indrajit, committee member; Sega, Ronald M., committee member
The number and lethality of violent extremist plots motivated by the Salafi-jihadist ideology have been growing for nearly the last decade in both the U.S. and Western Europe. While detecting the radicalization of violent extremists is a key component in preventing future terrorist attacks, it remains a significant challenge to law enforcement due to the issues of both scale and dynamics. Recent terrorist attack successes highlight the real possibility of missed signals from, or continued radicalization by, individuals whom the authorities had formerly investigated and even interviewed. Additionally, beyond considering just the behavioral dynamics of a person of interest is the need for investigators to consider the behaviors and activities of social ties vis-à-vis the person of interest. We undertake a fundamentally systems approach in addressing these challenges by investigating the need and feasibility of a radicalization detection system, a risk assessment assistance technology for law enforcement and intelligence agencies. The proposed system first mines public data and government databases for individuals who exhibit risk indicators for extremist violence, and then enables law enforcement to monitor those individuals at the scope and scale that is lawful, and account for the dynamic indicative behaviors of the individuals and their associates rigorously and automatically. In this thesis, we first identify the operational deficiencies of current law enforcement and intelligence agency efforts, investigate the environmental conditions and stakeholders most salient to the development and operation of the proposed system, and address both programmatic and technical risks with several initial mitigating strategies. We codify this large effort into a radicalization detection system framework. The main thrust of this effort is the investigation of the technological opportunities for the identification of individuals matching a radicalization pattern of behaviors in the proposed radicalization detection system. We frame our technical approach as a unique dynamic graph pattern matching problem, and develop a technology called INSiGHT (Investigative Search for Graph Trajectories) to help identify individuals or small groups with conforming subgraphs to a radicalization query pattern, and follow the match trajectories over time. INSiGHT is aimed at assisting law enforcement and intelligence agencies in monitoring and screening for those individuals whose behaviors indicate a significant risk for violence, and to allow for the better prioritization of limited investigative resources. We demonstrated the performance of INSiGHT on a variety of datasets, to include small synthetic radicalization-specific data sets, a real behavioral dataset of time-stamped radicalization indicators of recent U.S. violent extremists, and a large, real-world BlogCatalog dataset serving as a proxy for the type of intelligence or law enforcement data networks that could be utilized to track the radicalization of violent extremists.
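As an illustration of the trajectory-matching idea described above, the following minimal Python sketch scores how closely each person's time-stamped indicators match a query pattern over time. The indicator labels and event stream are hypothetical, and this is not the INSiGHT algorithm, only a toy partial-match tracker under those assumptions.

```python
from collections import defaultdict

# Toy sketch (not the INSiGHT implementation): given a query pattern of indicator
# labels and a stream of time-stamped (person, indicator) observations, track each
# person's partial-match score against the pattern over time.
QUERY_PATTERN = {"indicator_A", "indicator_B", "indicator_C"}   # hypothetical labels

def match_trajectories(events):
    """events: iterable of (timestamp, person, indicator), assumed time-ordered."""
    seen = defaultdict(set)           # person -> pattern indicators observed so far
    trajectories = defaultdict(list)  # person -> [(timestamp, match fraction)]
    for t, person, indicator in events:
        if indicator in QUERY_PATTERN:
            seen[person].add(indicator)
        trajectories[person].append((t, len(seen[person]) / len(QUERY_PATTERN)))
    return trajectories

if __name__ == "__main__":
    stream = [(1, "p1", "indicator_A"), (2, "p2", "indicator_B"),
              (3, "p1", "indicator_C"), (4, "p1", "indicator_B")]
    for person, trajectory in match_trajectories(stream).items():
        print(person, trajectory)
```

A real system would match structured subgraphs, including the activities of social ties, rather than flat indicator sets; that harder matching problem is what the work described here addresses.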
We also extended INSiGHT by developing a non-combinatorial neighbor matching technique to enable analysts to maintain visibility of potential collective threats and conspiracies and account for the role close social ties have in an individual's radicalization. This enhancement was validated on small, synthetic radicalization-specific datasets as well as the large BlogCatalog dataset with real social network connections and tagging behaviors for over 80K accounts. The results showed that our algorithm returned whole and partial subgraph matches that enabled analysts to gain and maintain visibility on neighbors' activities. Overall, INSiGHT led to consistent, informed, and reliable assessments about those who pose a significant risk for some latent behavior in a variety of settings. Based upon these results, we maintain that INSiGHT is a feasible and useful supporting technology with the potential to optimize law enforcement investigative efforts and ultimately enable the prevention of individuals from carrying out extremist violence. Although the prime motivation of this research is the detection of violent extremist radicalization, we found that INSiGHT is applicable in detecting latent behaviors in other domains such as on-line student assessment and consumer analytics. This utility was demonstrated through experiments with real data. For on-line student assessment, we tested INSiGHT on a MOOC dataset of students and time-stamped on-line course activities to predict those students who persisted in the course. For consumer analytics, we tested the performance on a real, large proprietary consumer activities dataset from a home improvement retailer. Lastly, motivated by the desire to validate INSiGHT as a screening technology when ground truth is known, we developed a synthetic data generator of large population, time-stamped, individual-level consumer activities data consistent with an a priori project set designation (latent behavior). This contribution also sets the stage for future work in developing an analogous synthetic data generator for radicalization indicators to serve as a testbed for INSiGHT and other data mining algorithms.

Item Open Access A tabu search evolutionary algorithm for multiobjective optimization: application to a bi-criterion aircraft structural reliability problem (Colorado State University. Libraries, 2015) Long, Kim Chenming, author; Duff, William S., advisor; Labadie, John W., advisor; Stansloski, Mitchell, committee member; Chong, Edwin K. P., committee member; Sampath, Walajabad S., committee member
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions.
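For readers unfamiliar with the Pareto terminology used here, the sketch below filters a set of candidate solutions down to the non-dominated (Pareto-optimal) subset, assuming two objectives that are both minimized (for example, retrofit cost and failure probability). The numbers are made up and the routine is illustrative only; it is not part of TSEA.

```python
def dominates(a, b):
    """True if solution a dominates b (every objective is minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (cost, failure-probability) pairs for candidate retrofits.
candidates = [(10, 0.20), (12, 0.12), (15, 0.12), (8, 0.30), (20, 0.05)]
print(non_dominated(candidates))   # the dominated point (15, 0.12) is dropped
```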
Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

Item Open Access Acoustic tomography of the atmosphere using iterated unscented Kalman filter (Colorado State University. Libraries, 2012) Kolouri, Soheil, author; Azimi-Sadjadi, Mahmood R., advisor; Chong, Edwin K. P., committee member; Cooley, Daniel S., committee member
Tomography approaches are of great interest because of their non-intrusive nature and their ability to generate a significantly larger amount of data in comparison to the in-situ measurement method. Acoustic tomography is an approach which reconstructs the unknown parameters that affect the propagation of acoustic rays in a field of interest by studying the temporal characteristics of the propagation. Acoustic tomography has been used in several different disciplines such as biomedical imaging, oceanographic studies and atmospheric studies. The focus of this thesis is to study acoustic tomography of the atmosphere in order to reconstruct the temperature and wind velocity fields in the atmospheric surface layer using the travel-times collected from several pairs of transmitter and receiver sensors distributed in the field. Our work consists of three main parts. The first part of this thesis is dedicated to reviewing the existing methods for acoustic tomography of the atmosphere, namely statistical inversion (SI), time dependent statistical inversion (TDSI), simultaneous iterative reconstruction technique (SIRT), and the sparse recovery framework. The properties of these methods are then explained extensively and their shortcomings are also mentioned.
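As background for the travel-time data that all of these inversion methods share, the following sketch evaluates a straight-ray time-of-arrival forward model on a small gridded field. The grid, sensor geometry, and the linear sound-speed approximation c(T) ≈ 331.3 + 0.606 T m/s are illustrative assumptions rather than the formulation used in the thesis.

```python
import numpy as np

# Minimal straight-ray travel-time sketch: the field is split into grid cells with
# constant temperature T (deg C) and wind (u, v); the effective propagation speed
# along a ray segment is the temperature-dependent sound speed plus the wind
# component along the ray direction.
def travel_time(tx, rx, cell_of, T, u, v, n_steps=200):
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    direction = (rx - tx) / np.linalg.norm(rx - tx)
    step = np.linalg.norm(rx - tx) / n_steps
    t = 0.0
    for k in range(n_steps):
        p = tx + (k + 0.5) * step * direction       # midpoint of this ray segment
        i = cell_of(p)                              # grid cell containing the midpoint
        c = 331.3 + 0.606 * T[i]                    # approximate sound speed in air
        wind_along = u[i] * direction[0] + v[i] * direction[1]
        t += step / (c + wind_along)
    return t

# Hypothetical 2x2-cell field on a 100 m x 100 m domain, one transmitter-receiver pair.
T = np.array([15.0, 16.0, 14.5, 15.5])
u = np.array([1.0, 0.5, 0.0, -0.5])
v = np.zeros(4)
cell_of = lambda p: int(p[0] >= 50) + 2 * int(p[1] >= 50)
print(travel_time((0, 0), (100, 100), cell_of, T, u, v))
```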
In the second part of this thesis, a new acoustic tomography method based on the Unscented Kalman Filter (UKF) is introduced in order to address some of the shortcomings of the existing methods. Using the UKF, the problem is cast as a state estimation problem in which the temperature and wind velocity fields are the desired states to be reconstructed. The field is discretized into several grids in which the temperature and wind velocity fields are assumed to be constant. Different models, namely random walk, first order 3-D autoregressive (AR) model, and 1-D temporal AR model, are used to capture the state evolution in time-space. Given the time of arrival (TOA) equation for acoustic propagation as the observation equation, the temperature and wind velocity fields are then reconstructed using a fixed point iterative UKF. The focus in the third part of this thesis is on generating meaningful synthetic data for the temperature and wind velocity fields to test the proposed algorithms. A 2-D Fractal Brownian motion (fBm)-based method is used in order to generate realizations of the temperature and wind velocity fields. The synthetic data is generated for 500 subsequent snapshots of wind velocity and temperature field realizations with spatial resolution of one meter and temporal resolution of 12 seconds. Given the location of acoustic sensors, the TOAs are calculated for all the acoustic paths. In addition, white Gaussian noise is added to the calculated TOAs in order to simulate the measurement error. The synthetic data is then used to test the proposed method and the results are compared to those of the TDSI method. This comparison attests to the superiority of the proposed method in terms of accuracy of reconstruction, real-time processing and the ability to track the temporal changes in the data.

Item Open Access An analysis of combinatorial search spaces for a class of NP-hard problems (Colorado State University. Libraries, 2011) Sutton, Andrew M., author; Whitley, L. Darrell, advisor; Howe, Adele E., advisor; Chong, Edwin K. P., committee member; Bohm, A. P. Willem, committee member
Given a finite but very large set of states X and a real-valued objective function ƒ defined on X, combinatorial optimization refers to the problem of finding elements of X that maximize (or minimize) ƒ. Many combinatorial search algorithms employ some perturbation operator to hill-climb in the search space. Such perturbative local search algorithms are state of the art for many classes of NP-hard combinatorial optimization problems such as maximum k-satisfiability, scheduling, and problems of graph theory. In this thesis we analyze combinatorial search spaces by expanding the objective function into a (sparse) series of basis functions. While most analyses of the distribution of function values in the search space must rely on empirical sampling, the basis function expansion allows us to directly study the distribution of function values across regions of states for combinatorial problems without the need for sampling. We concentrate on objective functions that can be expressed as bounded pseudo-Boolean functions which are NP-hard to solve in general. We use the basis expansion to construct a polynomial-time algorithm for exactly computing constant-degree moments of the objective function ƒ over arbitrarily large regions of the search space. On functions with restricted codomains, these moments are related to the true distribution by a system of linear equations.
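A brute-force toy of the underlying identity may help here: once a small pseudo-Boolean function is expanded in the Walsh basis, its mean over a region (a subcube with some variables fixed) follows directly from the coefficients supported on the fixed variables, with no sampling. The function and region below are arbitrary, and the exhaustive transform is only used to check the identity; it is not the polynomial-time constant-degree moment algorithm developed in this thesis.

```python
import itertools
import numpy as np

n = 4
def f(x):                       # a small, arbitrary pseudo-Boolean example
    return 3*x[0]*x[1] - 2*x[2] + x[1]*x[3] + 1

# Walsh coefficients w_b = 2^-n * sum_x f(x) * (-1)^(b.x), computed by brute force.
cube = list(itertools.product([0, 1], repeat=n))
w = {b: sum(f(x) * (-1)**sum(bi*xi for bi, xi in zip(b, x)) for x in cube) / 2**n
     for b in cube}

# Region: the subcube with x0 = 1 and x2 = 0 fixed, x1 and x3 free.
fixed = {0: 1, 2: 0}
direct = np.mean([f(x) for x in cube if all(x[i] == v for i, v in fixed.items())])

# From the expansion: only coefficients supported on the fixed variables survive.
from_basis = sum(w[b] * (-1)**sum(b[i]*v for i, v in fixed.items())
                 for b in cube if all(b[i] == 0 for i in range(n) if i not in fixed))
print(direct, from_basis)       # the two region means agree
```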
Given low moments supplied by our algorithm, we construct bounds on the true distribution of ƒ over regions of the space using a linear programming approach. A straightforward relaxation allows us to efficiently approximate the distribution and hence quickly estimate the count of states in a given region that have certain values under the objective function. The analysis is also useful for characterizing properties of specific combinatorial problems. For instance, by connecting search space analysis to the theory of inapproximability, we prove that the bound specified by Grover's maximum principle for the Max-Ek-Lin-2 problem is sharp. Moreover, we use the framework to prove certain configurations are forbidden in regions of the Max-3-Sat search space, supplying the first theoretical confirmation of empirical results by others. Finally, we show that theoretical results can be used to drive the design of algorithms in a principled manner by using the search space analysis developed in this thesis in algorithmic applications. First, information obtained from our moment retrieving algorithm can be used to direct a hill-climbing search across plateaus in the Max-k-Sat search space. Second, the analysis can be used to control the mutation rate on a (1+1) evolutionary algorithm on bounded pseudo-Boolean functions so that the offspring of each search point is maximized in expectation. For these applications, knowledge of the search space structure supplied by the analysis translates to significant gains in the performance of search.

Item Open Access Channel coding for network communication: an information theoretic perspective (Colorado State University. Libraries, 2011) Wang, Zheng, author; Luo, J. Rockey, advisor; Scharf, Louis L., committee member; Chong, Edwin K. P., committee member; Betten, Anton, committee member
Channel coding helps a communication system to combat noise and interference by adding "redundancy" to the source message. Theoretical fundamentals of channel coding in point-to-point systems have been intensively studied in the research area of information theory, which was proposed by Claude Shannon in his celebrated work in 1948. A set of landmark results have been developed to characterize the performance limitations in terms of the rate and the reliability tradeoff bounds. However, unlike its success in point-to-point systems, information theory has not yielded as rich results in network communication, which has been a key research focus over the past two decades. Due to the limitations posed by some of the key assumptions in classical information theory, network information theory is far from being mature and complete. For example, the classical information theoretic model assumes that communication parameters such as the information rate should be jointly determined by all transmitters and receivers. Communication should be carried out continuously over a long time such that the overhead of communication coordination becomes negligible. The communication channel should be stationary in order for the coding scheme to transform the channel noise randomness into deterministic statistics. These assumptions are valid in a point-to-point system, but they do not permit an extensive application of channel coding in network systems because they have essentially ignored the dynamic nature of network communication. Network systems deal with bursty message transmissions between highly dynamic users.
For various reasons, joint determination of key communication parameters before message transmission is often infeasible or expensive. Communication channels can often be non-stationary due to the dynamic communication interference generated by the network users. The objective of this work is to extend information theory toward network communication scenarios. We develop new channel coding results, in terms of the communication rate and error performance tradeoff, for several non-classical communication models, in which key assumptions made in classical channel coding are dropped or revised.

Item Open Access Continuity of object tracking (Colorado State University. Libraries, 2022) Williams, Haney W., author; Simske, Steven J., advisor; Azimi-Sadjadi, Mahmood R., committee member; Chong, Edwin K. P., committee member; Beveridge, J. Ross, committee member
The demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest: security, surveillance, intelligence gathering, and reconnaissance. Lately, newly-defined requirements for unmanned vehicles have enhanced the interest in OT. Advancements in machine learning, data analytics, and deep learning have facilitated the recognition and tracking of objects of interest; however, continuous tracking is currently a problem of interest to many research projects. This dissertation presents a system implementing a means to continuously track an object and predict its trajectory based on its previous pathway, even when the object is partially or fully concealed for a period of time. The system is divided into two phases: the first phase exploits a single fixed camera system and the second phase is composed of a mesh of multiple fixed cameras. The first phase system is composed of six main subsystems: Image Processing, Detection Algorithm, Image Subtractor, Image Tracking, Tracking Predictor, and the Feedback Analyzer. The second phase of the system adds two main subsystems: Coordination Manager and Camera Controller Manager. Combined, these systems allow for reasonable object continuity in the face of object concealment.

Item Open Access Design of a compact integrated high-power superconducting radio frequency electron beam source and klystron-inspired terahertz power source (Colorado State University. Libraries, 2018) Sipahi, Nihan, author; Maciejewski, Anthony A., advisor; Collins, George J., committee member; Chong, Edwin K. P., committee member; Buchanan, Norm, committee member
There exists a need for compact, reliable, high-power electron sources for applications including those in industry, basic science, medical science and security. There also exists a need for compact electron-beam based light and power sources of various power levels and at different frequencies (mm-wave to gamma rays) for applications also in the fields of basic science, industry, and security. Today's examples of high-average-power electron sources are neither very compact nor highly efficient. The same may be said for many of the electron-beam based light sources operated worldwide for a myriad of applications. Recent breakthroughs in superconducting (SC) materials technology, radio-frequency (RF) power systems, specialized cathodes, and RF cavity designs offer ways to overcome the above-mentioned shortcomings. In this dissertation, all of these new features are integrated in a comprehensive design into one promising concept for a compact superconducting RF (SRF) high-average-power electron linear accelerator.
This integrated design is capable of 5-50 kW average electron beam power and continuous-wave operation with the corresponding electron beam energy up to 10 MeV. In addition, the community also has a need for compact sources for many different wavelength regimes, as well as a variety of peak and average powers. Specifically, we are also exploring a novel continuous-wave terahertz source designed using basic principles of the beam manipulation methods used in free-electron laser (FEL) light sources.

Item Open Access Design, fabrication and testing of an electrically controlled microfluidic capillary microvalve based on hydrophobicity (Colorado State University. Libraries, 2019) Kulkarni, Gitesh S., author; Chen, Thomas W., advisor; Chong, Edwin K. P., committee member; Geiss, Brian, committee member
Microfluidics is a promising discipline that combines "micro" amounts of fluid handling in "micro"-sized channels and has found applications in diverse fields such as biotechnology and environmental monitoring. The combination of microfluidics with digital electronics technology has spurred the creation of Lab-on-a-Chip (LOC) devices that are field-deployable and have been brought to market in the last few decades. In these devices, positioning/transportation of liquids has remained a critical issue. A sample of fluid needs to be acquired from a specimen reservoir and moved to a different reservoir location for analysis. Inexpensive, reliable and straightforward methods to do this transportation make such instruments low-cost and robust for use in the field for a variety of purposes. Current ways to move fluid require high electric fields and hence the use of high voltages (thousands of volts), making the device bulkier. Another approach, using a pneumatic pump for droplet movement, is also detrimental to making LOC devices portable due to the sizes of the associated electronics and electrical parts. This thesis presents the design of a microfluidic valve using capillary action, hydrophobicity, and low voltages (several volts). The use of low voltages brings the "micro" realm to the digital electronics part of the LOC. It could lead to better portability, low-power operation of LOC devices, and consequently more adoption in field applications. The design process is based on practical considerations found during experimentation. This method was tested, and results are presented for various biochemical media, including KCl, PBS, GMOPS, cell culture and FBS.

Item Open Access Discovering and harnessing structures in solving application satisfiability instances (Colorado State University. Libraries, 2018) Chen, Wenxiang, author; Whitley, L. Darrell, advisor; Draper, Bruce A., committee member; Böhm, A. P. Wim, committee member; Chong, Edwin K. P., committee member
Boolean satisfiability (SAT) is the first problem proven to be NP-complete. It has become a fundamental problem for computational complexity theory, and many real-world problems can be encoded as SAT instances. Two major search paradigms have been proposed for SAT solving: Systematic Search (SS) and Stochastic Local Search (SLS). SLS solvers have been shown to be very effective on uniform random instances; SLS solvers are consistently the top winning entries for random tracks at SAT competitions. However, SS solvers dominate hard combinatorial tracks and industrial tracks at SAT competitions, with SLS entries being at the very bottom of the ranking. In this work, we classify both hard combinatorial instances and industrial instances as Application Instances.
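To make the SLS paradigm concrete, here is a bare-bones WalkSAT-style local search over a CNF formula (clauses as lists of signed integers). It is a generic textbook heuristic sketched for illustration, not one of the competition solvers discussed in this work.

```python
import random

def unsat_clauses(clauses, assign):
    """Clauses are lists of non-zero ints (DIMACS style); assign maps var -> bool."""
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def walksat(clauses, n_vars, max_flips=10000, noise=0.5, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assign)
        if not unsat:
            return assign                     # satisfying assignment found
        clause = rng.choice(unsat)            # pick a violated clause
        if rng.random() < noise:
            var = abs(rng.choice(clause))     # random-walk move
        else:                                 # greedy move: flip the variable whose
            var = min((len(unsat_clauses(clauses,
                                         {**assign, abs(l): not assign[abs(l)]})), abs(l))
                      for l in clause)[1]     # flip leaves the fewest unsatisfied clauses
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```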
As application instances are more interesting from a practical perspective, it is critical to analyze the structures in application instances as well as to improve SLS on application instances. We focus on two structural properties of SAT instances in this work: variable interaction topology and subproblem constrainedness. Decomposability focuses on how well the variable interaction of an application instance can be decomposed. We first show that many application instances are indeed highly decomposable. The decomposability of a SAT instance has been extensively exploited with success by SS solvers. Meanwhile, SLS solvers direct the variables to flip using only the objective function, and are completely oblivious to the decomposability of application instances that is inherent to the original problem domain. We propose a new method to decompose variable interactions within SLS solvers, leveraging numerous visited local optima. Our empirical study suggests that the proposed method can vastly simplify SAT instances, which further results in decomposing the instances into thousands of connected components. Furthermore, we demonstrate the utility of the decomposition in improving SLS solvers. We propose a new framework called PXSAT, based on the recombination operator Partition Crossover (PX). Given q components, PX is able to find the best of 2^q possible candidate solutions in linear time. Empirical results on an extensive set of application instances show PXSAT can yield statistically significantly better results. We improve two of the best local search solvers, AdaptG2WSAT and Sparrow. PXSAT combined with AdaptG2WSAT is also able to outperform CCLS, winner of several recent MAXSAT competitions. The other structural property we study is subproblem constrainedness. We observe that, on some application SAT instance classes, the original problem can be partitioned into several subproblems, where each subproblem is highly constrained. While subproblem constrainedness has been exploited in SS solvers before, we propose to exploit it in SLS solvers using two alternative representations that can be obtained efficiently from the canonical CNF representation. Our empirical results show that the new alternative representations enable a simple SLS solver to outperform several sophisticated and highly optimized SLS solvers on the SAT encoding of the semiprime factoring problem.

Item Open Access Kinematic design and motion planning of fault tolerant robots with locked joint failures (Colorado State University. Libraries, 2019) Xie, Biyun, author; Maciejewski, Anthony A., advisor; Chong, Edwin K. P., committee member; Pezeshki, Ali, committee member; Zhao, Jianguo, committee member
The problem of kinematic design and motion planning of fault tolerant robots with locked joint failures is studied in this work. In kinematic design, the problem of designing optimally fault tolerant robots for equal joint failure probabilities is first explored. A measure of local fault tolerance for equal joint failure probabilities has previously been defined based on the properties of the singular values of the Jacobian matrix. Based on this measure, one can determine a Jacobian that is optimal. Because these measures are solely based on the singular values of the Jacobian, permutation of the columns does not affect the optimality. Therefore, when one generates a kinematic robot design from this optimal Jacobian, there will be 7! robot designs with the same locally optimal fault tolerant property.
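The permutation argument can be checked numerically. The sketch below evaluates one common local fault-tolerance quantity, the worst-case minimum singular value of the Jacobian after any single locked-joint failure (modeled as deleting a column), for a hypothetical Jacobian, and shows that reordering the columns leaves it unchanged. This is an illustrative check, not the design procedure developed in this work.

```python
import numpy as np

def worst_case_post_failure_sv(J):
    """Smallest 'minimum singular value' of the Jacobian over all single
    locked-joint failures, modeled here as deleting one column."""
    return min(np.linalg.svd(np.delete(J, i, axis=1), compute_uv=False)[-1]
               for i in range(J.shape[1]))

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 7))                 # hypothetical 3x7 positioning Jacobian
perm = rng.permutation(7)                       # re-order the joints (columns)

print(worst_case_post_failure_sv(J))
print(worst_case_post_failure_sv(J[:, perm]))   # same value (up to round-off):
                                                # the measure ignores column order
```

Because the measure depends only on singular values, any column permutation of an optimal Jacobian is equally optimal locally, which is why a single optimal Jacobian generates the whole family of candidate designs mentioned above.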
This work shows how to analyze and organize the kinematic structure of these 7! designs in terms of their Denavit and Hartenberg (DH) parameters. Furthermore, global fault tolerant measures are defined in order to evaluate the different designs. It is shown that robot designs that are very similar in terms of DH parameters, e.g., robots generated from Jacobians where the columns are in reverse order, can have very different global properties. Finally, a computationally efficient approach to calculate the global pre- and post-failure dexterity measures is presented and used to identify two Pareto optimal robot designs. The workspaces for these optimal designs are also shown. Then, the problem of designing optimally fault tolerant robots for different joint failure probabilities is considered. A measure of fault tolerance for different joint failure probabilities is defined based on the properties of the singular values of the Jacobian after failures. Using this measure, methods to design optimally fault tolerant robots for an arbitrary set of joint failure probabilities and multiple cases of joint failure probabilities are introduced separately. Given an arbitrary set of joint failure probabilities, the optimal null space that optimizes the fault tolerant measure is derived, and the associated isotropic Jacobians are constructed. The kinematic parameters of the optimally fault tolerant robots are then generated from these Jacobians. One special case, i.e., how to construct the optimal Jacobian of spatial 7R robots for both positioning and orienting, is further discussed. For multiple cases of joint failure probabilities, the optimal robot is designed through optimizing the sum of the fault tolerant measures for all the possible joint failure probabilities. This technique is illustrated on planar 3R robots, and it is shown that there exists a family of optimal robots. After the optimally fault tolerant robots are designed, the problem of planning the optimal trajectory with minimum probability of task failure for a set of point-to-point tasks, after experiencing locked joint failures, is studied. The proposed approach first develops a method to calculate the probability of task failure for an arbitrary trajectory, where the trajectory is divided into small segments, and the probability of task failure of each segment is calculated based on its failure scenarios. Then, a motion planning algorithm is proposed to find the optimal trajectory with minimum probability of task failure. There are two cases. The trajectory in the first case is the optimal trajectory from the start configuration to the intersection of the bounding boxes of all the task points. In the other case, all the configurations along the self-motion manifold of task point 1 need to be checked, and the optimal trajectory is the trajectory with minimum probability of task failure among them. The proposed approach is demonstrated on planar 2R redundant robots, illustrating the effectiveness of the algorithm.

Item Open Access Kinematic design of redundant robotic manipulators that are optimally fault tolerant (Colorado State University. Libraries, 2014) Ben-Gharbia, Khaled M., author; Maciejewski, Anthony A., advisor; Chong, Edwin K. P., committee member; Roberts, Rodney G., committee member; Oprea, Iuliana, committee member
It is common practice to design a robot's kinematics from the desired properties that are locally specified by a manipulator Jacobian.
Conversely, one can determine a manipulator that possesses certain desirable kinematic properties by specifying the required Jacobian. For the case of optimality with respect to fault tolerance, one common definition is that the post-failure Jacobian possesses the largest possible minimum singular value over all possible locked-joint failures. This work considers Jacobians that have been designed to be optimally fault tolerant for 3R and 4R planar manipulators. It also considers 4R spatial positioning manipulators and 7R spatial manipulators. It has been shown in each case that multiple different physical robot kinematic designs can be obtained from (essentially) a single Jacobian that has desirable fault tolerant properties. In the first part of this dissertation, two planar examples, one that is optimal to a single joint failure and the second that is optimal to two joint failures, are analyzed. A mathematical analysis that describes the number of possible planar robot designs for optimally fault-tolerant Jacobians is presented. In the second part, the large family of physical spatial positioning manipulators that can achieve an optimally failure tolerant configuration are parameterized and categorized. The different categories of manipulator designs are then evaluated in terms of their global kinematic properties, with an emphasis on failure tolerance. Several manipulators with a range of desirable kinematic properties are presented and analyzed. In the third part, 7R manipulators that are optimized for fault tolerance for fully general spatial motion are discussed. Two approaches are presented for identifying a physically feasible 7R optimally fault tolerant Jacobian. A technique for calculating both reachable and fault tolerant six-dimensional workspace volumes is presented. Different manipulators are analyzed and compared. In both the planar and spatial cases, the analyses show that there are large variabilities in the global kinematic properties of these designs, despite being generated from the same Jacobian. One can select from these designs to optimize additional application-specific performance criteria.

Item Open Access Madden-Julian oscillation teleconnections and their influence on Northern Hemisphere winter blocking (Colorado State University. Libraries, 2017) Henderson, Stephanie A., author; Maloney, Eric D., advisor; Barnes, Elizabeth A., committee member; Thompson, David W. J., committee member; Chong, Edwin K. P., committee member
Winter blocking events are characterized by persistent and quasi-stationary patterns that re-direct precipitation and air masses, leading to long-lasting extreme winter weather. Studies have shown that the teleconnection patterns forced by the primary mode of tropical intraseasonal variability, the Madden-Julian Oscillation (MJO), influence extratropical factors associated with blocking, such as the North Atlantic Oscillation. However, the influence of the MJO on winter blocking is not well understood. Understanding this relationship may improve the mid-range forecasting of winter blocking and the associated weather extremes. The impact of the MJO on Northern Hemisphere winter blocking is examined using a two-dimensional blocking index. Results suggest that all MJO phases demonstrate significant changes in west and central Pacific high-latitude blocking.
East Pacific and Atlantic blocking are significantly suppressed following phase 3 of the MJO, characterized by anomalous convection in the tropical East Indian Ocean and suppressed convection in the west Pacific. A significant increase in east Pacific and Atlantic blocking follows the opposite-signed MJO heating during MJO phase 7. Over Europe, blocking is suppressed following MJO phase 4 and significantly increased after MJO phase 6. Results suggest that the European blocking increase may be due to two precursors: 1) a pre-existing anomalous Atlantic anticyclone, and 2) a negative Pacific North American (PNA) pattern triggered by the MJO. The influence of the MJO on winter blocking may be different if a change occurs to the basic state and/or MJO heating, such as during El Niño–Southern Oscillation (ENSO) events. MJO teleconnections during ENSO events are examined using composite analysis and a nonlinear baroclinic model, and their influence on winter high-latitude blocking is discussed. Results demonstrate that the ENSO-altered MJO teleconnection patterns significantly influence Pacific and Atlantic blocking and the impacts depend on ENSO phase. During El Niño, Pacific and Atlantic blocking is significantly increased following MJO phase 7, with maximum Atlantic blocking frequency anomalies reaching triple the climatological winter mean blocking frequency. Results suggest that the MJO forces the initial anomalous Atlantic dipole associated with the blocking increase, and transient eddy activity aids in its persistence. During La Niña, significant changes to high-latitude blocking are mostly observed during the first half of an MJO event, with significant suppression of Pacific and Atlantic blocking following MJO phase 3. MJO teleconnection patterns may also be altered by basic state and MJO heating biases in General Circulation Models (GCMs), important for mid-range forecasting and future climate studies of weather and climate patterns significantly altered by the MJO, such as winter blocking. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) is used to investigate MJO teleconnection biases due to basic state and MJO biases, and a linear baroclinic model is used to interpret the results. Results indicate that poor basic state GCMs (but with a good MJO) can have equally poor skill in simulating the MJO teleconnection patterns as GCMs with a poor MJO. Large biases in MJO teleconnection patterns occur in GCMs with a zonally extended Pacific subtropical jet relative to reanalysis. In good MJO GCMs, bias in the location and horizontal structure of Indo-Pacific MJO heating is found to have modest impacts on MJO teleconnection patterns. However, East Pacific heating during MJO events can influence MJO teleconnection amplitude and the pathways over North America. Results suggest that both the MJO and the basic state must be well represented in order to properly capture the MJO teleconnection patterns.

Item Open Access Microgrid optimization, modelling and control (Colorado State University. Libraries, 2014) Han, Yi, author; Yount, Peter M., advisor; Chong, Edwin K. P., committee member; Pezeshki, Ali, committee member; Anderson, Chuck, committee member
To view the abstract, please see the full text of the document.

Item Open Access Modeling and improving urban human mobility in disaster scenarios (Colorado State University. Libraries, 2020) Zou, Qiling, author; Chen, Suren, advisor; Heyliger, Paul, committee member; van de Lindt, John W., committee member; Chong, Edwin K. P., committee member
Natural and human-made disasters, such as earthquake, tsunami, fire, and terrorist attack, can disrupt the normal daily mobility patterns, posing severe risks to human lives and resulting in tremendous economic losses. Recent disaster events show that insufficient consideration of human mobility behavior may lead to erroneous, ineffective, and costly disaster mitigation and recovery decisions for critical infrastructure, and then the same tragedies may reoccur when facing future disasters. The objective of this dissertation is to develop advanced modeling and decision-making methodologies to investigate urban human mobility in disaster scenarios. It is expected that the proposed methodologies in this dissertation will help stakeholders and researchers gain a better understanding of emergency human behavior, evaluate the performance of disrupted infrastructure, and devise effective safety management and resilience enhancement strategies. Focusing on the two important mobility modes (i.e., walking and driving) in the urban environment, this dissertation (1) develops agent-based crowd simulation models to evaluate crowd dynamics in complex subway station environments and investigate the interplay among emotion contagion, information diffusion, decision-making processes, and egress behavior under a toxic gas incident; and (2) develops functionality modeling, interdependency characterization, and decision models to assess and enhance the resilience of transportation networks subject to hazards.

Item Open Access Ontological deflationism: plural quantification, mereological collections, and quantifier variance (Colorado State University. Libraries, 2011) Lightfield, Ceth, author; Losonsky, Michael, advisor; Chong, Edwin K. P., committee member; Sarenac, Darko, committee member
One criticism by deflationists about ontology is that ontological debates about composite material objects are merely verbal. That is, there is only apparent disagreement between the debating ontologists. In responding to such a deflationist view, Theodore Sider (2009) has argued that there is genuine disagreement between two ontologists concerning the ontological status of tables. In doing so, Sider has written that, using plural quantification, a mereological nihilist can grant the proposition 'There exist simples arranged tablewise' while denying the proposition 'There exist collections of simples arranged tablewise'. In the first chapter, I argue that Sider's response to the deflationist is unsuccessful for two reasons. The first is that plural quantification is not ontologically innocent. A semantic interpretation of a logical formula involving plural quantification will reveal a problematic locution, namely, 'one of them' where 'them' has a collection as its referent. The second concern with Sider's response is that the predicate 'arranged tablewise' is collective rather than distributive. A collection is needed to instantiate a collective predicate; thus, a commitment to simples arranged tablewise entails a commitment to a collection of simples arranged tablewise. In responding to the ontological deflationist, Sider discusses a debate between David Lewis and Peter van Inwagen about the existence of tables where a table is interpreted as a collection of simples arranged tablewise. As part of his discussion, Sider claims that Lewis and van Inwagen agree on what counts as a table.
Sider allows that the deflationist may have three candidate interpretations for what counts as a 'table', but none will support the deflationist conclusion. In the second chapter, I address each candidate interpretation: (1) using Composition as Identity - a table is simples arranged tablewise, (2) a table is a set-theoretic collection of simples arranged tablewise, and (3) using Unrestricted Composition - a table is a mereological collection of simples arranged tablewise. I argue against Lewis's argument for Composition as Identity and defend an argument by Sider in support of Unrestricted Composition. Thus, I argue that composition is unrestricted and not ontologically innocent. In doing so, I show that van Inwagen cannot grant 'There exist simples arranged tablewise' and deny the existence of tables. Thus, I show that, independent of plural quantification concerns, Sider is not successful in refuting the deflationist conclusion that the ontologists are equivocating on the word 'table'. Finally, in the third chapter, I address Sider's response to the deflationist claims that the ontologists are equivocating on the quantifier 'there exists'. I look at Sider's presentation of the argument and his response, which centers on an appeal to naturalness. Relying on Eli Hirsch's defense of quantifier variance, I show that the deflationist position can be maintained if Sider's appeal to naturalness is rejected. Additionally, I argue that Sider's constructed ideal language, Ontologese, does not allow Sider to avoid the deflationist criticisms. I also address the question of whether or not the deflationist program applies not only to ontological debates, but also to meta-ontological debates. To that end, I evaluate Gerald Marsh's (2010) meta-meta-ontological discussion in which he defends a dilemma for the Hirsch-Sider debate. I argue that Marsh's defense of the dilemma is problematic, and highlight a wider concern I have about meta-meta-ontological debates. I suggest that there is a frame-of-reference problem and end with the skeptical conclusion that answers at the meta-meta-ontological level are dependent on the language used to frame the debate.

Item Open Access Optimal stochastic scheduling of restoration of infrastructure systems from hazards: an approximate dynamic programming approach (Colorado State University. Libraries, 2019) Nozhati, Saeed, author; Ellingwood, Bruce R., advisor; Mahmoud, Hussam N., advisor; Chong, Edwin K. P., committee member; van de Lindt, John W., committee member
This dissertation introduces approximate dynamic programming (ADP) techniques to identify near-optimal recovery strategies following extreme natural hazards. The proposed techniques are intended to support policymakers, community stakeholders, and public or private entities to manage the restoration of critical infrastructure of a community following disasters. The computation of optimal scheduling schemes in this study employs the rollout algorithm, which provides an effective computational tool for optimization problems dealing with real-world large-scale networks and communities. The Markov decision process (MDP)-based optimization approach incorporates different sources of uncertainties to compute the restoration policies. The fusion of the proposed rollout method with metaheuristic algorithms and optimal learning techniques to overcome the computational intractability of large-scale, multi-state communities is probed in detail.
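As a concrete, if greatly simplified, picture of the rollout idea: at each decision epoch every candidate action is scored by simulating a fixed base (heuristic) policy to completion over several random futures, and the best-scoring action is committed. The component names, repair-time distributions, and weights below are invented for illustration and do not come from the dissertation's community model.

```python
import random

DURATION = {"power": 3, "water": 2, "food_retail": 4, "bridge": 5}  # hypothetical mean repair times
WEIGHT   = {"power": 5, "water": 4, "food_retail": 3, "bridge": 2}  # hypothetical importance weights

def simulate(order, rng):
    """Cost of a repair order = weighted sum of (random) completion times."""
    t = cost = 0.0
    for comp in order:
        t += rng.expovariate(1.0 / DURATION[comp])   # stochastic repair duration
        cost += WEIGHT[comp] * t
    return cost

def base_policy(remaining):
    """Simple heuristic the rollout improves on: repair in alphabetical order."""
    return sorted(remaining)

def rollout_schedule(components, n_sims=200, seed=1):
    rng, remaining, schedule = random.Random(seed), set(components), []
    while remaining:
        def score(action):                    # one-step lookahead: take `action`,
            tail = [action] + base_policy(remaining - {action})      # then follow
            sims = [simulate(schedule + tail, rng) for _ in range(n_sims)]  # the base policy
            return sum(sims) / n_sims
        best = min(remaining, key=score)
        schedule.append(best)
        remaining.remove(best)
    return schedule

print(rollout_schedule(DURATION))
```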
Different risk attitudes of policymakers, which include risk-neutral and risk-averse attitudes in community recovery management, are taken into account. The context for the proposed framework is provided by objectives related to minimizing food-insecurity issues and impacts within a small community in California following an extreme earthquake. Probabilistic food security metrics, including food availability, accessibility, and affordability, are defined and quantified to provide risk-informed decision support to policymakers in the aftermath of an extreme natural hazard. The proposed ADP-based approach then is applied to identify practical policy interventions to hasten the recovery of food systems and reduce the adverse impacts of food insecurity on a community. All proposed methods in this study are applied to a testbed community modeled after Gilroy, California, United States, which is impacted by earthquakes on the San Andreas Fault. Different infrastructure systems, along with their spatial distributions, are modeled as part of the evaluation of the restoration of food security within that community. The methods introduced are completely independent of the initial condition of a community following disasters and the type of community (network) simulation. They treat the built environment like a black box, which means the simulation and consideration of any arbitrary network and/or sector of a community do not affect the applicability and quality of the framework. Therefore, the proposed methodologies are believed to be adaptable to any infrastructure systems, hazards, and policymakers' preferences.

Item Open Access Parameter estimation from compressed and sparse measurements (Colorado State University. Libraries, 2015) Pakrooh, Pooria, author; Pezeshki, Ali, advisor; Scharf, Louis L., advisor; Chong, Edwin K. P., committee member; Luo, J. Rockey, committee member; Peterson, Chris, committee member
In this dissertation, the problem of parameter estimation from compressed and sparse noisy measurements is studied. First, fundamental estimation limits of the problem are analyzed. For that purpose, the effect of compressed sensing with random matrices on Fisher information, the Cramer-Rao Bound (CRB) and the Kullback-Leibler divergence are considered. The unknown parameters for the measurements are in the mean value function of a multivariate normal distribution. The class of random compression matrices considered in this work are those whose distribution is right-unitary invariant. The compression matrix whose elements are i.i.d. standard normal random variables is one such matrix. We show that for all such compression matrices, the Fisher information matrix has a complex matrix beta distribution. We also derive the distribution of the CRB. These distributions can be used to quantify the loss in CRB as a function of the Fisher information of the non-compressed data. In our numerical examples, we consider a direction of arrival estimation problem and discuss the use of these distributions as guidelines for deciding whether compression should be considered, based on the resulting loss in performance. Then, the effect of compression on performance breakdown regions of parameter estimation methods is studied. Performance breakdown may happen when either the sample size or signal-to-noise ratio (SNR) falls below a certain threshold.
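A small numerical illustration of the compression-induced information loss discussed above: for a Gaussian model whose mean is linear in the unknown parameters, the Fisher information and CRB can be evaluated before and after compression by one realization of an i.i.d. Gaussian matrix. This only exercises the standard formulas for a single draw; the distributional results derived in the dissertation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, sigma2 = 64, 24, 2, 1.0              # ambient dim, compressed dim, #parameters, noise var

H = rng.standard_normal((n, p))               # y = H theta + noise, noise ~ N(0, sigma2 * I)
Phi = rng.standard_normal((k, n))             # random compression matrix (i.i.d. Gaussian)

# Uncompressed Fisher information and CRB for theta.
J_full = H.T @ H / sigma2
crb_full = np.trace(np.linalg.inv(J_full))

# Compressed measurements z = Phi y have noise covariance sigma2 * Phi Phi^T.
cov_z = sigma2 * Phi @ Phi.T
J_comp = (Phi @ H).T @ np.linalg.solve(cov_z, Phi @ H)
crb_comp = np.trace(np.linalg.inv(J_comp))

print(f"trace CRB, full data:       {crb_full:.4f}")
print(f"trace CRB, compressed data: {crb_comp:.4f}")   # larger: compression loses information
```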
The main reason for this threshold effect is that in low SNR or sample size regimes, many high resolution parameter estimation methods, including subspace methods as well as maximum likelihood estimation, lose their capability to resolve signal and noise subspaces. This leads to a large error in parameter estimation. This phenomenon is called a subspace swap. The probability of a subspace swap for parameter estimation from compressed data is studied. A lower bound has been derived on the probability of a subspace swap in parameter estimation from compressed noisy data. This lower bound can be used as a tool to predict breakdown for different compression schemes at different SNRs. In the last part of this work, we look at the problem of parameter estimation for p damped complex exponentials, from the observation of their weighted and damped sum. This problem arises in spectrum estimation, vibration analysis, speech processing, system identification, and direction of arrival estimation. Our results differ from standard results of modal analysis to the extent that we consider sparse and co-prime samplings in space, or equivalently sparse and co-prime samplings in time. Our main result is a characterization of the orthogonal subspace. This is the subspace that is orthogonal to the signal subspace spanned by the columns of the generalized Vandermonde matrix of modes in sparse or co-prime arrays. This characterization is derived in a form that allows us to adapt modern methods of linear prediction and approximate least squares for estimating mode parameters. Several numerical examples are presented to demonstrate the performance of the proposed modal estimation methods. Our calculations of Fisher information allow us to analyze the loss in performance sustained by sparse and co-prime arrays that are compressions of uniform linear arrays.

Item Open Access Protecting critical services from DDoS attacks (Colorado State University. Libraries, 2012) Kambhampati, Vamsi K., author; Massey, Daniel, advisor; Papadopoulos, Christos, advisor; Strout, Michelle M., committee member; Chong, Edwin K. P., committee member
Critical services such as emergency response, industrial control systems, government and banking systems are increasingly coming under threat from Distributed Denial of Service (DDoS) attacks. To protect such services, in this dissertation we propose Epiphany, an architecture that hides the service IP address, making it hard for an attacker to find, attack and disable the service. Like other location hiding based approaches, Epiphany provides access to the service through numerous lightweight proxies, which present a very wide target for the attacker. However, unlike these solutions Epiphany uses a novel approach to hide the service from both clients and proxies, thus eliminating the need to trust proxies or apply a filtering perimeter around the service destination. The approach uses dynamically generated hidden paths that are fully controlled by the service, so if a specific proxy misbehaves or is attacked, it can be promptly removed. Since the service cannot be targeted directly, the attacker may target the proxy infrastructure. To combat such threats, Epiphany separates the proxies into setup and data proxies. Setup proxies are only responsible for letting a client make initial contact with the service, while data proxies provide further access to the service. However, the setup proxies employ IP anycast to isolate the network into distinct regions.
Connection requests generated in a region bounded by an anycast setup proxy are automatically directed to that proxy. This way, the attacker botnet becomes dispersed, i.e., the attacker cannot combine bots from different regions to target setup proxies in specific networks. By adding more anycast setup proxies, networks that only have legitimate clients can be freed from the perils of unclean networks (i.e., networks with attackers). Moreover, the attacker activity becomes more exposed in these unclean networks, upon which the operators may take further action, such as removing or blocking them until the problem is resolved. Epiphany data proxies are kept private; the service can assign different data proxies to distinct clients depending on how they are trusted. The attacker cannot disrupt the on-going communication of a client whose data proxy it does not know. We evaluate the effectiveness of Epiphany defenses using simulations on an Internet scale topology, and two different implementations involving real Internet routers and an overlay on PlanetLab.

Item Open Access Second-order sub-array Cartesian product split-plot design (Colorado State University. Libraries, 2015) Cortés-Mestres, Luis A., author; Duff, William S., advisor; Simpson, James R., advisor; Chong, Edwin K. P., committee member; Bradley, Thomas H., committee member; Jathar, Shantanu H., committee member
Fisher (1926) laid down the fundamental principles of design of experiments: factorization, replication, randomization, and local control of error. In industrial experiments, however, departure from these principles is commonplace. Many industrial experiments involve situations in which complete randomization may not be feasible because the factor level settings are impractical or inconvenient to change, the resources available to complete the experiment in homogenous settings are limited, or both. Restricted randomization due to factor levels that are impractical or inconvenient to change can lead to a split-plot experiment. Restricted randomization due to resource limitation can lead to blocking. Situations that require fitting a second-order model under those conditions lead to a second-order block split-plot experiment. Although response surface methodology has experienced phenomenal growth since Box and Wilson (1951), the departure from standard methods to tackle second-order block split-plot design remains, for the most part, unexplored. Most graduate textbooks only provide a relatively basic treatment of the subject. Peer-reviewed literature is scarce, has a limited number of examples, and provides guidelines that often are too general. This deficit of information leaves practitioners ill prepared to face the roadblocks illuminated by Simpson, Kowalski, and Landman (2004). Practical strategies to help practitioners in dealing with the challenges presented by second-order block split-plot design are provided, including an end-to-end, innovative approach for the construction of a new form of effective and efficient response surface design referred to as the second-order sub-array Cartesian product split-plot design. This new form of design is an alternative to ineffective split-plot designs that are currently in use by the manufacturing and quality control community.
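The Cartesian-product structure referred to in the name can be illustrated with a toy construction: a whole-plot sub-array of hard-to-change factor settings crossed with a sub-plot sub-array of easy-to-change settings, with randomization restricted to whole plots and to runs within each whole plot. The factor levels below are arbitrary two-level and center-point settings chosen for illustration; they are not the dissertation's second-order construction.

```python
import random

# Hard-to-change (whole-plot) factor settings and easy-to-change (sub-plot) settings.
whole_plot_array = [(-1, -1), (-1, 1), (1, -1), (1, 1)]          # factors A, B (illustrative)
sub_plot_array   = [(-1, -1), (-1, 1), (1, -1), (1, 1), (0, 0)]  # factors C, D (illustrative)

rng = random.Random(0)
design = []
for wp_id, wp in enumerate(rng.sample(whole_plot_array, len(whole_plot_array))):  # randomize whole plots
    runs = rng.sample(sub_plot_array, len(sub_plot_array))        # randomize runs within the whole plot
    for sp in runs:                                               # Cartesian product: every sub-plot
        design.append((wp_id,) + wp + sp)                         # combination appears in every whole plot

print("whole_plot  A   B   C   D")
for row in design:
    print("{:>10} {:>3} {:>3} {:>3} {:>3}".format(*row))
```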
The design is economical, the prediction variance of the regression coefficients is low and stable, and the aliasing between the terms in the model and effects that are not in the model, as well as the correlation between similar effects that are not in the model, is low. Based on an assessment using well-accepted key design evaluation criteria, it is demonstrated that second-order sub-array Cartesian product split-plot designs perform as well as or better than historical designs that have been considered standards up to this point.

Item Open Access Shake table testing of hybrid wood shear wall system (Colorado State University. Libraries, 2019) Anandan, Yeshwant Kumar, author; van de Lindt, John, advisor; Jia, Gaofeng, committee member; Chong, Edwin K. P., committee member
Cross-Laminated Timber (CLT) is an engineered, prefabricated mass timber product that has shown excellent structural and mechanical properties. With the growing application of CLT in industry, there have been a number of research projects carried out to introduce CLT into tall buildings located in high seismic regions. The concept of post-tensioning mass timber has been adopted from concrete systems, and this led to the development of seismically resilient structural systems that can undergo multiple earthquakes and continue to re-center. This thesis presents the results of a shake table test program that focused on testing of a one-story, full-scale hybrid wood shear wall system comprised of a post-tensioned CLT wall panel with Light-frame wood shear (LiFS) wall panels on each side. The testing was conducted at CSU's Engineering Research Center shake table. The objective of this study was to combine the advantages of the post-tensioned CLT systems with those of LiFS walls. The hybrid shear wall system in the testing had two LiFS walls on either side of a post-tensioned rocking CLT wall panel. Mild steel rods were used as post-tensioning rods in this experiment, and the test structure also included gravity frames constructed with wood studs (but no sheathing) and a CLT floor diaphragm to support a seismic weight of 12,000 lbs. The structure was subjected to the 1989 Loma Prieta ground motion record scaled to different intensities. The final test used the original 1994 Northridge ground motion record from the Rinaldi record station, with a slight reduction to be able to be accommodated by the 20-inch stroke of the shake table actuator. This test was conducted to understand the collapse mechanism of the structure, and it demonstrated the ability of the post-tensioned CLT to re-center the structure after 5% inter-story drifts as well as the ability of the LiFS walls to act as energy dissipation and lateral force resisting systems.