Browsing by Author "Oprea, Iuliana, committee member"
Now showing 1 - 20 of 22
Item Open Access: A multi-task learning method using gradient descent with applications (Colorado State University. Libraries, 2021)
Larson, Nathan Dean, author; Azimi-Sadjadi, Mahmood R., advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee member
There is a critical need to develop classification methods that can robustly and accurately classify different objects in varying environments. Each environment in a classification problem can contain its own unique challenges, which prevent traditional classifiers from performing well. To solve classification problems in different environments, multi-task learning (MTL) models have been applied that define each environment as a separate task. We discuss two existing MTL algorithms and explain how they are inefficient for situations involving high-dimensional data. A gradient descent-based MTL algorithm is proposed which allows for high-dimensional data while providing accurate classification results. Additionally, we introduce a kernelized MTL algorithm which may allow us to generate nonlinear classifiers. We compared our proposed MTL method with an existing method, the Efficient Lifelong Learning Algorithm (ELLA), by using them to train classifiers on the underwater unexploded ordnance (UXO) and extended modified National Institute of Standards and Technology (EMNIST) datasets. The UXO dataset contained acoustic color features of low-frequency sonar data; both real data collected from physical experiments and synthetic data were used, forming separate environments. The EMNIST digits dataset contains grayscale images of handwritten digits. We used this dataset to show how our proposed MTL algorithm performs when used with more tasks than are in the UXO dataset. Our classification experiments showed that our gradient descent-based algorithm improved performance over the traditional methods. The improvement was small on the UXO dataset and much larger on the EMNIST dataset when our MTL algorithm was compared to ELLA and single-task learning.
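
As context for the gradient-descent MTL idea summarized above, the following is a minimal sketch of one common formulation in which each task's weight vector is a combination of shared latent components (w_t = L s_t), loosely in the spirit of ELLA-style factorizations. The dimensions, squared-error loss, and variable names are illustrative assumptions, not the thesis's exact model.

```python
# Illustrative sketch only: generic gradient-descent multi-task learning with a shared
# latent basis L and per-task coefficients S (w_t = L @ S[:, t]). Not the thesis's algorithm.
import numpy as np

def mtl_gradient_descent(tasks, n_latent=5, lr=1e-2, n_iters=500, lam=1e-3):
    """tasks: list of (X, y) pairs, one per environment/task."""
    d = tasks[0][0].shape[1]
    rng = np.random.default_rng(0)
    L = rng.normal(scale=0.1, size=(d, n_latent))           # shared basis
    S = rng.normal(scale=0.1, size=(n_latent, len(tasks)))  # task-specific coefficients

    for _ in range(n_iters):
        gL = lam * L
        gS = lam * S
        for t, (X, y) in enumerate(tasks):
            w = L @ S[:, t]
            r = X @ w - y                  # residual for task t
            gw = X.T @ r / len(y)          # gradient w.r.t. the task weight vector
            gL += np.outer(gw, S[:, t])    # chain rule into the shared basis
            gS[:, t] += L.T @ gw           # chain rule into the task coefficients
        L -= lr * gL
        S -= lr * gS
    return L, S

# Example usage with two synthetic "environments":
# X1, y1 = np.random.randn(100, 20), np.random.randn(100)
# X2, y2 = np.random.randn(80, 20), np.random.randn(80)
# L, S = mtl_gradient_descent([(X1, y1), (X2, y2)])
```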

Item Open Access: A parametric classification of directed acyclic graphs (Colorado State University. Libraries, 2017)
Chaturvedi, Mmanu, author; McConnell, Ross M., advisor; Kirby, Michael J., committee member; Rajopadhye, Sanjay V., committee member; Oprea, Iuliana, committee member
We consider four NP-hard optimization problems on directed acyclic graphs (DAGs), namely, max clique, min coloring, max independent set, and min clique cover. It is well known that these four problems can be solved in polynomial time on transitive DAGs. It is also known that there can be no polynomial O(n^(1-ϵ))-approximation algorithms for these problems on the general class of DAGs unless P = NP. We propose a new parameter, β, as a measure of departure from transitivity for DAGs. We define β to be the number of vertices in a longest path in a DAG such that there is no edge from the first to the last vertex of the path, and 2 if the graph is transitive. Different values of β define a hierarchy of classes of DAGs, starting with the class of transitive DAGs. We give a polynomial time algorithm for finding a max clique when β is bounded by a fixed constant. The algorithm is exponential in β, but we also give a polynomial β-approximation algorithm. We prove that the other three decision problems are NP-hard even for β ≥ 4 and give polynomial algorithms with approximation bounds of β or better in each case. Furthermore, generalizing the definition of quasi-transitivity introduced by Ghouila-Houri, we define β-quasi-transitivity and prove a more general version of their theorem relating quasi-transitive orientation and transitive orientation.
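
To make the parameter β concrete, the following is a small brute-force sketch that computes it directly from the definition above for tiny graphs; the thesis's own algorithms are more sophisticated, and this sketch is only an illustration.

```python
# Brute-force illustration of beta: the number of vertices on a longest path whose first and
# last vertices are not joined by an edge (beta = 2 for a transitive DAG). Not from the thesis.
from functools import lru_cache

def beta(n, edges):
    """n: number of vertices labeled 0..n-1; edges: set of directed pairs (u, v) of a DAG."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)

    @lru_cache(maxsize=None)
    def longest_to(u, target):
        # Number of vertices on a longest u -> target path, or 0 if target is unreachable.
        if u == target:
            return 1
        best = 0
        for w in adj[u]:
            sub = longest_to(w, target)
            if sub:
                best = max(best, 1 + sub)
        return best

    best = 2  # value for transitive DAGs, by definition
    for u in range(n):
        for v in range(n):
            if u != v and (u, v) not in edges:
                best = max(best, longest_to(u, v))
    return best

# Example: the path 0 -> 1 -> 2 with the transitive edge (0, 2) missing has beta = 3.
# print(beta(3, {(0, 1), (1, 2)}))   # -> 3
```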

Item Open Access: A recursive least squares training approach for convolutional neural networks (Colorado State University. Libraries, 2022)
Yang, Yifan, author; Azimi-Sadjadi, Mahmood, advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee member
This thesis develops a fast method to train convolutional neural networks (CNNs) using the recursive least squares (RLS) algorithm in conjunction with back-propagation learning. In the training phase, the mean squared error (MSE) between the actual and desired outputs is iteratively minimized. The recursive updating equations for CNNs are derived via the back-propagation method and normal equations. This method does not need the choice of a learning rate and hence does not suffer from the speed-accuracy trade-off. Additionally, it is much faster than the conventional gradient-based methods in the sense that it needs fewer epochs to converge. The learning curves of the proposed method together with those of the standard gradient-based methods using the same CNN structure are generated and compared on the MNIST handwritten digits and Fashion-MNIST clothes databases. The simulation results show that the proposed RLS-based training method requires only one epoch to meet the error goal during the training phase while offering comparable accuracy on the testing data sets.
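
For intuition about the learning-rate-free updates mentioned above, here is the classic RLS recursion applied to a single linear layer. The thesis derives analogous recursive updates for full CNNs via back-propagation, which is not reproduced here; this is only the textbook building block.

```python
# Minimal sketch of the classic recursive least squares (RLS) recursion for one linear layer.
import numpy as np

class RLSLayer:
    def __init__(self, n_inputs, n_outputs, delta=1e2, lam=1.0):
        self.W = np.zeros((n_outputs, n_inputs))
        self.P = delta * np.eye(n_inputs)   # inverse input-correlation estimate
        self.lam = lam                      # forgetting factor (1.0 = no forgetting)

    def update(self, x, d):
        """x: input vector, d: desired output vector; one RLS step, no learning rate."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        e = d - self.W @ x                  # a priori error
        self.W += np.outer(e, k)            # weight update
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# Example usage:
# layer = RLSLayer(n_inputs=10, n_outputs=3)
# for x, d in data:                        # data: iterable of (input, target) pairs
#     layer.update(x, d)
```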

Item Open Access: A theoretical and numerical investigation of warm-phase microphysical processes (Colorado State University. Libraries, 2015)
Igel, Adele, author; van den Heever, Susan, advisor; Kreidenweis, Sonia, committee member; Rutledge, Steven, committee member; Oprea, Iuliana, committee member
Several studies examining microphysical processes are conducted with an emphasis on further understanding warm-phase processes, particularly condensation. In general, these studies progress from simple to complex representations of microphysical processes in models. In the first study, a theoretical, analytical expression is developed for the condensational invigoration (the invigoration in the warm phase of a cloud due to changes in the condensation rate) of a polluted, cloudy parcel of air relative to a clean, cloudy parcel of air. The expression is shown to perform well compared to parcel model simulations, and to accurately predict the invigoration to within 30% or less. The expression is then used to explore the sensitivity of invigoration to a range of initial conditions. It is found that the invigoration, in terms of added kinetic energy, is more sensitive to the cloud base temperature than to the initial buoyancy of the parcels. Changes in vertical velocity between clean and polluted parcels of up to 4.5 m s^-1 at 1 km above cloud base are theoretically possible, and the difference in vertical velocity decreases when the initial vertical velocity of either parcel is large. These theoretical predictions are expected to represent an upper limit to the magnitude of condensational invigoration and should be applicable to both shallow cumulus clouds and the warm phase of deep convection. In the second study, the focus shifts to the comparison of the representation of microphysical processes in single- and double-moment microphysics schemes. Single-moment microphysics schemes have long enjoyed popularity for their simplicity and efficiency. However, it is argued that the assumptions inherent in these parameterizations can induce large errors in the proper representation of clouds and their feedbacks to the atmosphere. For example, precipitation is shown to increase by 200% through changes to fixed parameters in a single-moment scheme, and low cloud fraction in the radiative-convective equilibrium (RCE) simulations drops from ~15% in double-moment simulations to ~2% in single-moment simulations. This study adds to the large body of work that has shown that double-moment schemes generally outperform single-moment schemes. It is recommended that future studies, especially those employing cloud-resolving models, strongly consider moving to the exclusive use of multi-moment microphysics schemes. An alternative to multi-moment schemes is a bin scheme. In the third study, the condensation rates predicted by bin and bulk microphysics schemes in the same model framework are compared in a novel way using simulations of non-precipitating shallow cumulus clouds. The bulk scheme generally predicts lower condensation rates than does the bin scheme when the saturation ratio and the integrated diameter of the droplet distribution are identical. Despite other fundamental disparities between the bin and bulk condensation parameterizations, the differences in condensation rates are predominantly explained by accounting for the width of the cloud droplet size distributions simulated by the bin scheme, which can alter the rates by 50% or more in some cases. The simulations are used again in the fourth study in order to further investigate the dependence of condensation and evaporation rates on the shape parameter and how this dependence impacts the microphysical and optical properties of clouds. The double-moment bulk microphysics simulations reveal that the shape parameter can lead to large changes in the average condensation rates, particularly in evaporating regions of the cloud where feedbacks between evaporation and the depletion of individual droplets magnify the dependence of the evaporation rate on the shape parameter. As a result, the average droplet number concentration increases as the shape parameter increases, but changes to the cloud water content are small. Taken together, these impacts lead to a decrease in the average cloud albedo. Finally, the simulations indicate that the value of the shape parameter in subsaturated cloudy air is more important than the value in supersaturated cloudy air, and that a constant shape parameter may not be a poor assumption for simulations of non-precipitating shallow cumulus clouds.

Item Open Access: Accelerated adaptive numerical methods for computational electromagnetics: enhancing goal-oriented approaches to error estimation, refinement, and uncertainty quantification (Colorado State University. Libraries, 2022)
Harmon, Jake J., author; Notaroš, Branislav M., advisor; Estep, Don, committee member; Ilić, Milan, committee member; Oprea, Iuliana, committee member
This dissertation develops strategies to enhance adaptive numerical methods for partial differential equation (PDE) and integral equation (IE) problems in computational electromagnetics (CEM). Through a goal-oriented emphasis, with a particular focus on scattered field and radar cross-section (RCS) quantities of interest (QoIs), we study automated acceleration techniques for the analysis of scattering targets. As a primary contribution of this work, we propose an error-prediction refinement strategy which, in addition to providing rigorous global error estimates (as opposed to just error indicators), promotes equilibration of local error contribution estimates, a key requirement of efficient discretizations. Furthermore, we pursue consistent exponential convergence of the QoIs with respect to the number of degrees of freedom without prior knowledge of the solution behavior (whether smooth or otherwise) or the sensitivity of the QoIs to the discretization quality. These developments, in addition to supporting significant reductions in computation time for high accuracy, offer enhanced confidence in simulation results, promoting, therefore, higher quality decision making and design. Moreover, aside from the need for rigorous error estimation and fully automated discretization error control, practical simulations necessitate a study of uncertain effects arising, for example, from manufacturing tolerances. Therefore, by repeating the emphasis on the QoI, we leverage the computational efforts expended in error estimation and adaptive refinement to relate perturbations in the model to perturbations of the QoI in the context of applications in CEM. This combined approach permits simultaneous control of deterministic discretization error and its effect on the QoI as well as a study of the QoI behavior in a statistical sense. A substantial implementation infrastructure undergirds the developments pursued in this dissertation. In particular, we develop an approach to conducting flexible refinements capable of tuning both local spatial resolution ($h$-refinements) and enriching function spaces ($p$-refinements) for vector finite elements. Based on a superposition of refinements (as opposed to traditional refinement-by-replacement), the presented $hp$-refinement paradigm drastically reduces implementation overhead, permits straightforward representation of meshes of arbitrary irregularity, and retains the potential for theoretically optimal rates of convergence even in the presence of singularities. These developments amplify the utility of high-quality error estimation and adaptive refinement mechanisms by facilitating the insertion of new degrees of freedom with surgical precision in CEM applications. We apply the proposed methodologies to a strong set of canonical targets and benchmarks in electromagnetic scattering and the Maxwell eigenvalue problem. While directed at time-harmonic excitations, the proposed methods readily apply to other problems and applications in applied mathematics.
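
The estimate-mark-refine cycle with equilibration of local error contributions, described above in the FEM/IE setting, can be illustrated with a deliberately simple stand-in problem. The sketch below uses adaptive trapezoidal integration of a 1D function, where the "QoI" is the integral and the "elements" are intervals; it is only an analogy for the strategy discussed in the dissertation, not its algorithm.

```python
# Toy, self-contained estimate-mark-refine loop with error-contribution equilibration.
import numpy as np

def adaptive_quadrature(f, a, b, tol=1e-6, max_iters=50):
    nodes = np.linspace(a, b, 5)
    for _ in range(max_iters):
        # Per-interval error contribution: |trapezoid - Simpson| as a crude local estimate.
        contrib = []
        for x0, x1 in zip(nodes[:-1], nodes[1:]):
            xm = 0.5 * (x0 + x1)
            trap = 0.5 * (x1 - x0) * (f(x0) + f(x1))
            simp = (x1 - x0) / 6.0 * (f(x0) + 4.0 * f(xm) + f(x1))
            contrib.append(abs(trap - simp))
        total = sum(contrib)
        if total < tol:
            break
        # Equilibration-style marking: split intervals whose contribution reaches the mean.
        avg = total / len(contrib)
        new_nodes = [nodes[0]]
        for (x0, x1), c in zip(zip(nodes[:-1], nodes[1:]), contrib):
            if c >= avg:
                new_nodes.append(0.5 * (x0 + x1))
            new_nodes.append(x1)
        nodes = np.array(new_nodes)
    qoi = sum(0.5 * (x1 - x0) * (f(x0) + f(x1)) for x0, x1 in zip(nodes[:-1], nodes[1:]))
    return qoi, len(nodes)

# Example: a function with a sharp feature; refinement clusters near x = 0.5.
# print(adaptive_quadrature(lambda x: np.exp(-200 * (x - 0.5) ** 2), 0.0, 1.0))
```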

Item Open Access: Characterizing the self-motion manifolds of redundant robots of arbitrary kinematic structures (Colorado State University. Libraries, 2022)
Almarkhi, Ahmad A., author; Maciejewski, Anthony A., advisor; Chong, Edwin, committee member; Oprea, Iuliana, committee member; Zhao, Jianguo, committee member
Robot fault tolerance measures can be classified into two categories: 1) local measures that are based on the singular value decomposition (SVD) of the robot Jacobian, and 2) global measures that are suitable to quantify fault tolerance more effectively in pick-and-place applications. One can use the size of the self-motion manifold of a robot as a global fault-tolerance measure. The size of the self-motion manifold at a certain end-effector location can be simply the sum of the ranges of the joint angles of the robot at that location. This work employs the fact that the largest self-motion manifolds occur when two (or more) previously disjoint manifolds merge. The merging of previously disjoint manifolds occurs at special configurations in the joint space called singularities. Singularities (singular configurations) occur when two or more of the robot joint axes become aligned and are linearly dependent. A significant amount of research has been performed on identifying robot singularities, but it was all based on symbolically solving for when the robot Jacobian is not of full rank. In this work, an algorithm is proposed that is based on the gradient of the singular values of the robot Jacobian. This algorithm is not limited to any specific number of degrees of freedom (DoF), robot kinematic structure, or rank of singularity. Based on the robot singularities, one can search for the largest self-motion manifold near robot singularities. The measure of the size of the self-motion manifold was chosen to eliminate the effect of the self-motion manifold's topology and dimension. Because the SVD is not uniquely defined at singularities, one can employ Givens rotations to define the physically meaningful singular directions, i.e., the directions in which the robot is not able to move. This approach has been extensively implemented on a 4-DoF robot, different 7-DoF robots, and an 8-DoF robot. The global fault-tolerance measure might be further optimized by changing the kinematic structure of a robot. This may allow one to determine a globally fault-tolerant robot, i.e., a robot with a 2π range for all of its joint angles at a certain end-effector location, i.e., a location that is most suitable for pick-and-place tasks.
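
To illustrate the idea of tracking singular values of the Jacobian and their gradients with respect to the joint angles, the sketch below uses a planar 3R position Jacobian with assumed link lengths and a finite-difference gradient. The actual algorithm in the dissertation is more general (arbitrary DoF, structure, and singularity rank) and is not reproduced here.

```python
# Sketch: smallest singular value of a planar 3R position Jacobian and its numerical gradient.
# Link lengths and step sizes are arbitrary illustrative choices.
import numpy as np

L = [1.0, 1.0, 0.5]  # assumed link lengths

def jacobian(q):
    q1, q12, q123 = q[0], q[0] + q[1], q[0] + q[1] + q[2]
    s1, s12, s123 = np.sin(q1), np.sin(q12), np.sin(q123)
    c1, c12, c123 = np.cos(q1), np.cos(q12), np.cos(q123)
    return np.array([
        [-(L[0]*s1 + L[1]*s12 + L[2]*s123), -(L[1]*s12 + L[2]*s123), -L[2]*s123],
        [  L[0]*c1 + L[1]*c12 + L[2]*c123,   L[1]*c12 + L[2]*c123,   L[2]*c123],
    ])

def smallest_singular_value(q):
    return np.linalg.svd(jacobian(q), compute_uv=False)[-1]

def grad_smallest_sv(q, h=1e-6):
    # Finite-difference gradient; following it downhill drives the arm toward a singularity.
    g = np.zeros(3)
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = h
        g[i] = (smallest_singular_value(q + dq) - smallest_singular_value(q - dq)) / (2 * h)
    return g

# Example: a fully stretched-out arm is singular (smallest singular value ~ 0).
# print(smallest_singular_value(np.array([0.3, 0.0, 0.0])))
```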

Item Open Access: Cooperative control of mobile sensor platforms in dynamic environments (Colorado State University. Libraries, 2014)
Ragi, Shankarachary, author; Chong, Edwin K. P., advisor; Krapf, Diego, committee member; Luo, J. Rockey, committee member; Oprea, Iuliana, committee member
We develop guidance algorithms to control mobile sensor platforms, for both centralized and decentralized settings, in dynamic environments for various applications. More precisely, we develop control algorithms for the following mobile sensor platforms: unmanned aerial vehicles (UAVs) with on-board sensors for multitarget tracking, autonomous amphibious vehicles for flood-rescue operations, and directional sensors (e.g., surveillance cameras) for maximizing an information-gain-based objective function. The following is a brief description of each of the above-mentioned guidance control algorithms. We develop both centralized and decentralized control algorithms for UAVs based on the theories of partially observable Markov decision processes (POMDPs) and decentralized POMDPs (Dec-POMDPs), respectively. Both POMDPs and Dec-POMDPs are intractable to solve exactly; therefore we adopt an approximation method called nominal belief-state optimization (NBO) to solve (approximately) the control problems posed as a POMDP or a Dec-POMDP. We then address an amphibious vehicle guidance problem for a flood rescue application. Here, the goal is to control multiple autonomous amphibious vehicles while minimizing the average rescue time of multiple human targets stranded in a flood situation. We again pose this problem as a POMDP, and extend the above-mentioned NBO approximation method to solve the guidance problem. In the final phase, we study the problem of controlling multiple 2-D directional sensors while maximizing an objective function based on the information gain corresponding to multiple target locations. This problem is found to be a combinatorial optimization problem, so we develop heuristic methods to solve the problem approximately, and provide analytical results on performance guarantees. We then improve the performance of our heuristics by applying an approximate dynamic programming approach called rollout.

Item Open Access: Design and control of kinematically redundant robots for maximizing failure-tolerant workspaces (Colorado State University. Libraries, 2021)
Bader, Ashraf M., author; Maciejewski, Anthony A., advisor; Oprea, Iuliana, committee member; Pezeshki, Ali, committee member; Young, Peter, committee member
Kinematically redundant robots have extra degrees of freedom so that they can tolerate a joint failure and still complete an assigned task. Previous work has defined the "failure-tolerant workspace" as the workspace that is guaranteed to be reachable both before and after an arbitrary locked-joint failure. One mechanism for maximizing this workspace is to employ optimal artificial joint limits prior to a failure. This dissertation presents two techniques for determining these optimal artificial joint limits. The first technique is based on the gradient ascent method. The proposed technique is able to deal with the discontinuities of the gradient that are due to changes in the boundaries of the failure-tolerant workspace. This technique is illustrated using two examples of three degree-of-freedom planar serial robots. The first example is an equal link length robot where the optimal artificial joint limits are computed exactly. In the second example, both the link lengths and artificial joint limits are determined, resulting in a robot design that has more than twice the failure-tolerant area of previously published locally optimal designs. The second technique presented in this dissertation is a novel hybrid technique for estimating the failure-tolerant workspace size for robots of arbitrary kinematic structure and any number of degrees of freedom performing tasks in a 6D workspace. The method presented combines an algorithm for computing self-motion manifold ranges to estimate workspace envelopes and Monte-Carlo integration to estimate orientation volumes to create a computationally efficient algorithm. This algorithm is then combined with the coordinate ascent optimization technique to determine optimal artificial joint limits that maximize the size of the failure-tolerant workspace of a given robot. This approach is illustrated on multiple examples of robots that perform tasks in 3D planar and 6D spatial workspaces.
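
The coordinate ascent step mentioned above, in which one artificial joint limit is adjusted at a time, follows a generic pattern that can be sketched independently of the kinematics. In the sketch below the objective is a simple stand-in; the dissertation's objective (failure-tolerant workspace size) requires a full kinematic model, and the variable names and bounds here are illustrative assumptions.

```python
# Generic coordinate-ascent sketch of the kind used to tune artificial joint limits.
import numpy as np

def coordinate_ascent(objective, x0, bounds, n_sweeps=20, n_grid=41):
    x = np.array(x0, dtype=float)
    for _ in range(n_sweeps):
        improved = False
        for i in range(len(x)):               # optimize one artificial limit at a time
            lo, hi = bounds[i]
            candidates = np.linspace(lo, hi, n_grid)
            values = []
            for c in candidates:
                trial = x.copy()
                trial[i] = c
                values.append(objective(trial))
            best = candidates[int(np.argmax(values))]
            if objective(x) < max(values):
                x[i] = best
                improved = True
        if not improved:
            break
    return x

# Stand-in objective: peaks inside the box, mimicking "workspace size vs. joint limits".
# limits = coordinate_ascent(lambda v: -np.sum((v - 0.7) ** 2), x0=[0.0, 0.0, 0.0],
#                            bounds=[(-np.pi, np.pi)] * 3)
```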

Item Open Access: Diagnosing the angular momentum fluxes that drive the quasi-biennial oscillation (Colorado State University. Libraries, 2023)
Hughes, Ann-Casey, author; Randall, David A., advisor; Hurrell, James, committee member; Oprea, Iuliana, committee member
The quasi-biennial oscillation (QBO) is a descending pattern of alternating easterly and westerly equatorial stratospheric winds that is produced by the upward transport of momentum in multiple types of atmospheric waves. The discovery of the QBO and its role in the global circulation are discussed. The angular momentum budget of the QBO is analyzed using ERA-Interim isentropic analyses. We explain the benefits of isentropic coordinates and angular momentum as tools for analyzing atmospheric motion. We diagnose vertical motion utilizing continuity, allowing direct computation of the angular momentum fluxes due to vertical motion. The angular momentum fluxes due to unresolved convectively generated gravity waves are computed as a residual. These results are discussed with the goal of improving the representation of sub-grid-scale motions in numerical models. We also discuss these results in the context of the reliability of reanalysis datasets and the downsides of treating reanalysis data as observations, and we revisit the seasonal dependence of the QBO transition.

Item Open Access: Differential equation models of wildfire suppression allocation (Colorado State University. Libraries, 2018)
Masarie, Alex Taylor, author; Wei, Yu, advisor; Oprea, Iuliana, committee member; Thompson, Matt, committee member; Belval, Erin, committee member
(CHAPTER 1) Current policy calls for efficient and effective wildfire response, which requires an understanding of the system's complexity. Data visualization often provides key insight to initiate any normative modeling effort to reveal best practices when implementing the policy. This chapter outlines a procedure to make MATLAB structures from a resource tracking database. We prepared a wildfire suppression allocation database, built an animation and graphical user interface, and initiated our investigation of differential equations using GIS maps and phase-plane plots as descriptive aids. (CHAPTER 2) Efficient and effective wildland fire response requires interregional coordination of suppression resources. We developed a mathematical model to examine how scarce resources are shared. This chapter outlines how we collected and processed the data, set up the model, and applied both to identify best-fit parameters. We interpret model outputs on interregional test cases that reflect the difficult tradeoffs in this resource allocation problem. By regressing a linear system of ordinary differential equations, with GIS data for demand predictors such as suppression resource use, ongoing fire activity, fire weather metrics, accessibility, and population density, onto pre-smoothed Resource Ordering and Status System (ROSS) wildfire personnel and equipment requests, we fit a national-scale regression. We interpret these parameters, report additional statistical properties, and indicate how these findings might be interpreted for personnel and equipment sharing by examining test cases for national, central/southern Rockies, and California interregional sharing. Abrupt switching behavior across medium and high alert levels was found in these test cases, and workloads are expected to increase over time. (CHAPTER 3) Accumulation of burnable forest fuels is changing natural wildfire regimes. Recent megafires are an unintended consequence. Our capability to suppress unwanted fires stems from a complex national sharing process in which specialized firefighting resources mobilize around the United States. This work elaborated a coupled system of partial differential equations and tested it on an archive of risk and allocation data from 2011-2016. This chapter poses a consistent mathematical model for wildfire suppression management that explains how spatiotemporal variation in fire risk impacts allocation. Analogies between the seasonal flow of fire suppression demand potential and dynamics of physical flows are outlined for advection, diffusion, reaction, rotation, and feedback. To orient these mathematical methods in the context of resource allocation, we present multi-fire management examples varying in scope from local demand interactions on the Holloway/Barry Point/Rush Fires in 2012 to large perturbations in national allocation. We prototype objective functions.
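
The regression of a linear ODE system onto time-series demand data, described in Chapter 2 above, can be sketched generically as least squares on finite-difference derivative estimates. The sketch below uses synthetic data and is only an illustration of that pattern, not the chapter's actual model or its ROSS/GIS inputs.

```python
# Sketch: fit dx/dt = A x + B u to time series by least squares on finite differences.
import numpy as np

def fit_linear_ode(x, u, dt):
    """x: (T, n) state time series; u: (T, m) predictor time series; returns (A, B)."""
    dxdt = (x[1:] - x[:-1]) / dt              # forward-difference derivative estimates
    Z = np.hstack([x[:-1], u[:-1]])           # regressors at the left endpoints
    coeffs, *_ = np.linalg.lstsq(Z, dxdt, rcond=None)
    n = x.shape[1]
    A = coeffs[:n].T
    B = coeffs[n:].T
    return A, B

# Synthetic check: simulate a known system, then recover its coefficients.
# rng = np.random.default_rng(1)
# A_true = np.array([[-0.5, 0.1], [0.0, -0.3]]); B_true = np.array([[0.2], [0.4]])
# dt, T = 0.01, 2000
# x = np.zeros((T, 2)); u = rng.normal(size=(T, 1))
# for t in range(T - 1):
#     x[t + 1] = x[t] + dt * (A_true @ x[t] + B_true @ u[t])
# print(fit_linear_ode(x, u, dt))
```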

Item Open Access: Electromagnetic model subdivision and iterative solvers for surface and volume double higher order numerical methods and applications (Colorado State University. Libraries, 2019)
Manić, Sanja B., author; Notaroš, Branislav, advisor; Reising, Steven, committee member; Chandrasekar, V., committee member; Oprea, Iuliana, committee member; Ilić, Milan, committee member
Higher order methods have been established in the numerical analysis of electromagnetic structures, decreasing the number of unknowns compared to low-order discretizations. To decrease memory requirements even further, model subdivision has been used in the computational analysis of electrically large structures. The technique is based on clustering elements and solving/approximating subsystems separately, and it is often implemented in conjunction with iterative solvers. This thesis addresses unique theoretical and implementation details specific to model subdivision of structures discretized by Double Higher Order (DHO) elements and analyzed by i) the Finite Element Method - Mode Matching (FEM-MM) technique for closed-region (waveguide) structures and ii) the Surface Integral Equation Method of Moments (SIE-MoM) in combination with the (Multi-Level) Fast Multipole Method for open-region bodies. Besides the standard application of decreasing the model size, DHO FEM-MM is applied to modeling a communication system in tunnels by means of the Standard Impedance Boundary Condition (SIBC), and excellent agreement is achieved with measurements performed in the Massif Central tunnel. To increase the accuracy of the SIE-MoM computation, a novel method for numerical evaluation of the 2-D surface integrals in MoM matrix entries has been improved to achieve better accuracy than the traditional method. To demonstrate its efficiency and practicality, the SIE-MoM technique is applied to the analysis of a rain event containing a significant percentage of oscillating drops recorded by a 2D video disdrometer. Excellent agreement with previously obtained radar measurements has been established, demonstrating the benefits of accurately modeling precipitation particles.

Item Open Access: Fast and accurate double-higher-order method of moments accelerated by Diakoptic Domain Decomposition and memory efficient parallelization for high performance computing systems (Colorado State University. Libraries, 2015)
Manić, Ana, author; Notaros, Branislav, advisor; Reising, Steven, committee member; Oprea, Iuliana, committee member; Roy, Sourajeet, committee member; Ilić, Milan, committee member
To view the abstract, please see the full text of the document.

Item Open Access: Higher order volume/surface integral equation modeling of antennas and scatterers using diakoptics and method of moments (Colorado State University. Libraries, 2015)
Chobanyan, Elene, author; Notaros, Branislav M., advisor; Reising, Steven, committee member; Oprea, Iuliana, committee member; Chandrasekar, V., committee member; Pezeshki, Ali, committee member
The principal objective of this dissertation is to develop, test, and optimize accurate, efficient, and robust computational methodology and tools for modeling of general antennas and scatterers based on solutions of electromagnetic integral equation formulations using the method of moments (MoM) and diakoptics. The approaches and implementations include the volume integral equation (VIE) method and its hybridization with the surface integral equation (SIE) method, in two ways. The first way combines the VIE method for dielectric parts and the SIE method for metallic parts of the structure. The second way performs subdivision of the entire structure into SIE domains of different constant permittivities, while modeling the inhomogeneity within each domain by the VIE method, employing different Green's functions, and describing the inhomogeneity within each domain in terms of a perturbation with respect to the background permittivity. The first approach is very suitable for analysis of composite wire-plate-dielectric radiation/scattering structures. The second approach provides a particularly efficient solution to problems involving inhomogeneities embedded within high-contrast homogeneous dielectric scatterers. The efficiency of computation is enhanced by applying diakoptic domain decomposition. In the VIE-SIE diakoptic method, the interior diakoptic subsystems containing inhomogeneous dielectric materials are analyzed completely independently applying the VIE-SIE MoM solver, and the solution to the original problem is obtained from linear relations between electric and magnetic surface-current diakoptic coefficients on diakoptic surfaces, written in the form of matrices. The techniques implement Lagrange-type generalized curved parametric hexahedral MoM-VIE volume elements and quadrilateral MoM-SIE and diakoptic patches of arbitrary geometrical-mapping orders, and divergence-conforming hierarchical polynomial vector basis functions of arbitrary current expansion orders. The hexahedra can be filled with inhomogeneous dielectric materials with continuous spatial variations of the permittivity described by Lagrange interpolation polynomials of arbitrary material-representation orders. Numerical computation is further accelerated by MPI parallelization to enable analysis of large electromagnetic problems.

Item Open Access: Hopf bifurcation in anisotropic reaction diffusion systems posed in large rectangles (Colorado State University. Libraries, 2010)
Olson, Travis Andrew, author; Dangelmayr, G. (Gerhard), 1951-, advisor; Eykholt, Richard Eric, 1956-, committee member; Kirby, Michael, 1961-, committee member; Oprea, Iuliana, committee member
The oscillatory instability (Hopf bifurcation) for anisotropic reaction diffusion equations posed in large (but finite) rectangles is investigated. The work pursued in this dissertation extends previous studies for infinitely extended 2D systems to include finite-size effects. For the case considered, the solution of the reaction diffusion system is represented in terms of slowly modulated complex amplitudes of four wave-trains propagating in four oblique directions. While for the infinitely extended system the modulating amplitudes are independent dynamical variables, the finite size of the domain leads to relations between them induced by wave reflections at the boundaries. This leads to a single amplitude equation for a doubly periodic function that captures all four envelopes in different regions of its fundamental domain. The amplitude equation is derived by matching an asymptotic bulk solution to an asymptotic boundary layer solution. While for the corresponding infinitely extended system no further parameters generically remain in the amplitude (envelope) equations above the onset value of the control parameter, the finite-size amplitude equation retains a dependence on a rescaled version of this parameter. Numerical simulations show that the dynamics of the bounded system shows different behavior at onset in comparison to the unbounded system, and the complexity of the solutions significantly increases when the rescaled control parameter is increased. As an application of the technique developed, an anisotropic activator-inhibitor model with higher order diffusion is studied, and parameter values of the amplitude equations are calculated for several parameter sets of the model equations.

Item Open Access: Kinematic design of redundant robotic manipulators that are optimally fault tolerant (Colorado State University. Libraries, 2014)
Ben-Gharbia, Khaled M., author; Maciejewski, Anthony A., advisor; Chong, Edwin K. P., committee member; Roberts, Rodney G., committee member; Oprea, Iuliana, committee member
It is common practice to design a robot's kinematics from the desired properties that are locally specified by a manipulator Jacobian. Conversely, one can determine a manipulator that possesses certain desirable kinematic properties by specifying the required Jacobian. For the case of optimality with respect to fault tolerance, one common definition is that the post-failure Jacobian possesses the largest possible minimum singular value over all possible locked-joint failures. This work considers Jacobians that have been designed to be optimally fault tolerant for 3R and 4R planar manipulators. It also considers 4R spatial positioning manipulators and 7R spatial manipulators. It has been shown in each case that multiple different physical robot kinematic designs can be obtained from (essentially) a single Jacobian that has desirable fault-tolerant properties. In the first part of this dissertation, two planar examples, one optimal with respect to a single joint failure and the other optimal with respect to two joint failures, are analyzed. A mathematical analysis that describes the number of possible planar robot designs for optimally fault-tolerant Jacobians is presented. In the second part, the large family of physical spatial positioning manipulators that can achieve an optimally failure-tolerant configuration is parameterized and categorized. The different categories of manipulator designs are then evaluated in terms of their global kinematic properties, with an emphasis on failure tolerance. Several manipulators with a range of desirable kinematic properties are presented and analyzed. In the third part, 7R manipulators that are optimized for fault tolerance for fully general spatial motion are discussed. Two approaches are presented for identifying a physically feasible 7R optimally fault-tolerant Jacobian. A technique for calculating both reachable and fault-tolerant six-dimensional workspace volumes is presented. Different manipulators are analyzed and compared. In both the planar and spatial cases, the analyses show that there are large variabilities in the global kinematic properties of these designs, despite being generated from the same Jacobian. One can select from these designs to optimize additional application-specific performance criteria.

Item Open Access: Mechanism-enabled population balances and the effects of anisotropies in the complex Ginzburg-Landau equation (Colorado State University. Libraries, 2019)
Handwerk, Derek, author; Shipman, Patrick, advisor; Dangelmayr, Gerhard, committee member; Oprea, Iuliana, committee member; Finke, Richard, committee member
This paper considers two problems. The first is a chemical modeling problem which makes use of ordinary differential equations to discover a minimum mechanism capable of matching experimental data in various metal nanoparticle nucleation and growth systems. This research has led to the concept of mechanism-enabled population balance modeling (ME-PBM), defined as the use of experimentally established nucleation mechanisms of particle formation to create more rigorous population balance models. ME-PBM achieves the goal of connecting reliable experimental mechanisms with the understanding and control of particle-size distributions. The ME-PBM approach uncovered a new and important 3-step mechanism that provides the best fits to experimentally measured particle-size distributions (PSDs). The three steps of this mechanism are slow, continuous nucleation and two surface growth steps. The importance of the two growth steps is that large particles are allowed to grow more slowly than small particles. This finding, that large particles grow more slowly than small ones, is a paradigm shift away from the notion of needing nucleation to stop, such as in LaMer burst nucleation, in order to achieve narrow PSDs. The second is a study of the effects of anisotropy on the dynamics of spatially extended systems through the use of the anisotropic complex Ginzburg-Landau equation (ACGLE) and its associated phase diffusion equations. The anisotropy leads to different types of solutions not seen in the isotropic equation, due to the ability of waves to simultaneously be stable and unstable, including transient spiral defects together with phase-chaotic ripples. We create a phase diagram, using the average L² energy, for initial conditions representing both the longwave k = 0 case and wavevectors near the circle |k| = μ.
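
For reference, a commonly used form of the anisotropic complex Ginzburg-Landau equation mentioned above is shown below for a complex amplitude A(x, y, t); the generic coefficients a_1, a_2, b and the scaling are standard-textbook choices, and the specific form and coefficients studied in the dissertation may differ.

```latex
% A commonly used form of the anisotropic complex Ginzburg-Landau equation (ACGLE);
% the coefficients a_1, a_2, b are generic and are not taken from the dissertation.
\[
  \partial_t A \;=\; A \;+\; (1 + i a_1)\,\partial_{xx} A \;+\; (1 + i a_2)\,\partial_{yy} A
  \;-\; (1 + i b)\,|A|^2 A .
\]
```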

Item Open Access: Optimal higher order modeling methodology based on method of moments and finite element method for electromagnetics (Colorado State University. Libraries, 2011)
Klopf, Eve Marian, author; Notaroš, Branislav M., advisor; Chandrasekar, V., committee member; Reising, Steven C., committee member; Oprea, Iuliana, committee member
General guidelines and quantitative recipes for the adoption of optimal higher order parameters for computational electromagnetics (CEM) modeling using the method of moments and the finite element method are established and validated, based on an exhaustive series of numerical experiments and comprehensive case studies on higher order hierarchical CEM models of metallic and dielectric scatterers. The modeling parameters considered are: electrical dimensions of elements (subdivisions) in the model (h-refinement), polynomial orders of basis and testing functions (p-refinement), orders of Gauss-Legendre integration formulas (numbers of integration points - integration accuracy), and geometrical orders of elements (orders of Lagrange-type curvature) in the model. The goal of the study, which is the first such study of higher order parameters in CEM, is to reduce the dilemmas and uncertainties associated with the great modeling flexibility of higher order elements, basis and testing functions, and integration procedures (this flexibility is the principal advantage but also the greatest shortcoming of higher order CEM), and to ease and facilitate the decisions to be made on how to actually use them, by both CEM developers and practitioners. The ultimate goal is to close the large gap between the rising academic interest in higher order CEM, which evidently shows great numerical potential, and its actual usefulness and application to electromagnetics research and engineering applications.

Item Open Access: Optimal path planning for detection and classification of underwater targets using sonar (Colorado State University. Libraries, 2021)
Robbiano, Christopher P., author; Chong, Edwin K. P., advisor; Azimi-Sadjadi, Mahmood R., advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee member
The work presented in this dissertation focuses on choosing an optimal path for performing sequential detection and classification state estimation to identify potential underwater targets using sonar imagery. The detection state estimation falls under the occupancy grid framework, modeling the relationship between the occupancy state of grid cells and sensor measurements, and allows for the consideration of statistical dependence between the occupancy states of the grid cells in the map. This is in direct contrast to the classical formulations of occupancy grid frameworks, in which the occupancy state of each grid cell is considered statistically independent. The new method provides more accurate estimates, and occupancy grids estimated with this method typically converge with fewer measurements. The classification state estimation utilises a Dirichlet-Categorical model and a one-step classifier to perform efficient updating of the classification state estimate for each grid cell. To show the performance capabilities of the developed sequential state estimation methods, they are applied to sonar systems in littoral areas in which targets lie on the seafloor and may be proud, partially buried, or fully buried. Additionally, a new approach to the active perception problem, which seeks to select a series of sensing actions that provide the maximal amount of information to the system, is developed. This new approach leverages the aforementioned sequential state estimation techniques to develop a set of information-theoretic cost functions that can be used for optimal sensing action selection. A path planning cost function is developed, defined as the mutual information between the aforementioned state variables before and after a measurement. The cost function is expressed in closed form by considering the prior and posterior distributions of the state variables. The choice of optimal sensing actions is performed by modeling the path planning as a Markov decision problem and solving it with the rollout algorithm. This work, supported by the Office of Naval Research (ONR), is intended to develop a suite of interactive sensing algorithms to autonomously command an autonomous underwater vehicle (AUV) for the task of detection and classification of underwater mines, while choosing an optimal navigation route that increases the quality of the detection and classification state estimates.
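
To make the Dirichlet-Categorical classification-state idea above concrete, the sketch below shows the textbook conjugate pseudo-count update of per-cell class beliefs; the dissertation's one-step classifier and sonar likelihood models are not reproduced here, and the hard-label observations are an illustrative simplification.

```python
# Small sketch of a Dirichlet-Categorical update of per-cell class beliefs.
import numpy as np

class CellClassifier:
    def __init__(self, n_classes, prior_counts=1.0):
        # Dirichlet pseudo-counts, one per class (uniform prior by default).
        self.alpha = np.full(n_classes, prior_counts, dtype=float)

    def update(self, observed_class):
        """Incorporate one (hard) classification observation for this grid cell."""
        self.alpha[observed_class] += 1.0

    def posterior_mean(self):
        """Current categorical class probabilities (posterior predictive)."""
        return self.alpha / self.alpha.sum()

# Example: a cell observed twice as class 1 and once as class 0.
# cell = CellClassifier(n_classes=3)
# for obs in [1, 1, 0]:
#     cell.update(obs)
# print(cell.posterior_mean())   # -> [0.333..., 0.5, 0.166...] with prior counts of 1
```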

Item Open Access: Reducing off-chip memory accesses of wavefront parallel programs in Graphics Processing Units (Colorado State University. Libraries, 2014)
Ranasinghe, Waruna, author; Rajopadhye, Sanjay, advisor; Bohm, Wim, committee member; Oprea, Iuliana, committee member
The power wall is one of the major barriers that stand in the way of exascale computing. To break the power wall, overall system power/energy must be reduced without affecting performance. We can decrease energy consumption by designing power-efficient hardware and/or software. In this thesis, we present a software approach to lower the energy consumption of programs targeted for Graphics Processing Units (GPUs). The main idea is to reduce energy consumption by minimizing the number of off-chip (global) memory accesses. Off-chip memory accesses can be minimized by improving the last-level (L2) cache hit rate. A wavefront is a set of data/tiles that can be processed concurrently. A kernel is a function that is executed on the GPU. We propose a novel approach to implement wavefront parallel programs on GPUs. Instead of using one kernel call per wavefront, as in the traditional implementation, we use one kernel call for the whole program and organize the order of computations in such a way that L2 cache reuse is achieved. A strip of wavefronts (or a pass) is a collection of partial wavefronts. We exploit the non-preemptive behavior of the thread block scheduler to process a strip of wavefronts (i.e., a pass) instead of processing a complete wavefront at a time. The data transferred by a partial wavefront in a pass is small enough to fit in the L2 cache, so that successive partial wavefronts in the pass reuse the data in the L2 cache. Hence the number of off-chip memory accesses is significantly reduced. We also introduce a technique to communicate and synchronize between two thread blocks without limiting the number of thread blocks per kernel or per streaming multiprocessor (SM). This technique is used to maintain the order of wavefronts. We have analytically shown and experimentally validated the reduction in off-chip memory accesses achieved by our approach. The off-chip memory reads and writes are decreased by factors of 45 and 3, respectively. We have shown that if GPUs incorporate an L2 cache with a write-back cache write policy, then off-chip memory writes are also reduced by a factor of 45. Our approach provides 98% L2 cache read hits and 74% total cache hits, while the traditional approach reports only 2% and 1%, respectively.

Item Open Access: Transport-radiation feedbacks of ozone in the tropical tropopause layer (Colorado State University. Libraries, 2017)
Charlesworth, Edward, author; Birner, Thomas, advisor; Ravishankara, A. R., committee member; Oprea, Iuliana, committee member
The tropical tropopause layer (TTL) is a region in the atmosphere that shows an interesting combination of tropospheric and stratospheric characteristics over the extent of several kilometers. For example, the TTL shows both convectively driven tropospheric dynamics and the beginning of the mechanically driven Brewer-Dobson circulation. The TTL is also important for climate due to its role as the gateway for most air that enters the stratosphere. In this work, a single-column model is used to investigate why a tropical tropopause layer of the observed vertical extent exists. This is done through computations of radiative-convective equilibrium temperatures and interactive photochemical equilibrium ozone concentrations. The model uses only a basic simulation of ozone chemistry, convection, and stratospheric upwelling, but the results show that such a simplified expression of critical processes can produce temperature and ozone profiles that are very similar to observations. It is found that vertical transport of ozone by the Brewer-Dobson circulation, and its associated effects on radiative heating rates, is of first-order importance in producing the observed temperature structure of the tropical tropopause layer, within this simple modeling context. Adiabatic cooling due to stratospheric upwelling is found to be equally important in generating the tropical tropopause layer. With these combined processes, it is suggested that even the lowest upwelling velocities on the order of observed upwelling can produce a TTL. With regard to climate change through a strengthening Brewer-Dobson circulation, this model suggests that an increase in upwelling from 0.5 to 0.6 mm/s should cool the cold-point tropopause by 3.5 K and loft it by half a kilometer.