Theses and Dissertations

Recent Submissions

Now showing 1 - 20 of 180
  • Item (Open Access)
    Number of 4-cycles of the genus 2 superspecial isogeny graph
    (Colorado State University. Libraries, 2024) Sworski, Vladimir P., author; Pries, Rachel, advisor; Hulpke, Alexander, committee member; Rajopadhye, Sanjay, committee member; Shoemaker, Mark, committee member
    The genus 2 superspecial degree-2 isogeny graph over a finite field of size p² is a network graph whose vertices are constructed from genus 2 superspecial curves and whose edges are the degree 2 isogenies between them. Flynn and Ti discovered 4-cycles in the graph, which pose problems for applications in cryptography. Florit and Smith constructed an atlas which describes what the neighborhood of each vertex looks like. We wrote a program in SageMath that can calculate neighborhoods of these graphs for small primes. Much of our work is motivated by these computations. We examine the prevalence of 4-cycles in the graph and, motivated by work of Arpin et al. in the genus 1 situation, in the subgraph called the spine. We calculate the number of 4-cycles that pass through vertices of 12 of the 14 kinds possible. This also resulted in constructing the neighborhood of all vertices two steps or fewer away for three special types of curves. We also establish conjectures about the number of vertices and cycles in small neighborhoods of the spine.
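    The 4-cycle counts central to this work can be illustrated on any small undirected graph with a standard closed-walk identity, tr(A⁴) = 8·C₄ + 2·Σdᵢ² − 2m. The sketch below is plain Python/NumPy (not the SageMath code described in the abstract) and uses the 3-dimensional hypercube graph as a stand-in example.

```python
import numpy as np
from itertools import combinations

def count_4cycles(A):
    """Count 4-cycles using the closed-walk identity
    tr(A^4) = 8*C4 + 2*sum(deg_i^2) - 2*m  (m = number of edges)."""
    A = np.asarray(A, dtype=np.int64)
    deg = A.sum(axis=1)
    m = int(deg.sum()) // 2
    t = int(np.trace(np.linalg.matrix_power(A, 4)))
    return (t - 2 * int((deg ** 2).sum()) + 2 * m) // 8

def count_4cycles_bruteforce(A):
    total = 0
    for a, b, c, d in combinations(range(len(A)), 4):
        # the three distinct 4-cycles on a fixed set of four vertices
        for w, x, y, z in ((a, b, c, d), (a, c, b, d), (a, b, d, c)):
            if A[w][x] and A[x][y] and A[y][z] and A[z][w]:
                total += 1
    return total

# example: the 3-dimensional hypercube graph, whose 4-cycles are its 6 square faces
A = [[1 if bin(i ^ j).count("1") == 1 else 0 for j in range(8)] for i in range(8)]
print(count_4cycles(A), count_4cycles_bruteforce(A))   # both give 6
```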
  • Item (Open Access)
    Persistence and simplicial metric thickenings
    (Colorado State University. Libraries, 2024) Moy, Michael, author; Adams, Henry, advisor; Patel, Amit, committee member; Peterson, Christopher, committee member; Ben-Hur, Asa, committee member
    This dissertation examines the theory of one-dimensional persistence with an emphasis on simplicial metric thickenings and studies two particular filtrations of simplicial metric thickenings in detail. It gives self-contained proofs of foundational results on one-parameter persistence modules of vector spaces, including interval decomposability, existence of persistence diagrams and barcodes, and the isometry theorem. These results are applied to prove the stability of persistent homology for sublevel set filtrations, simplicial complexes, and simplicial metric thickenings. The filtrations of simplicial metric thickenings studied in detail are the Vietoris–Rips and anti-Vietoris–Rips metric thickenings of the circle. The study of the Vietoris–Rips metric thickenings is motivated by persistent homology and its use in applied topology, and it builds on previous work on their simplicial complex counterparts. On the other hand, the study of the anti-Vietoris–Rips metric thickenings is motivated by their connections to graph colorings. In both cases, the homotopy types of these spaces are shown to be odd-dimensional spheres, with dimensions depending on the scale parameters.
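    The Vietoris–Rips construction at the heart of these filtrations is simple to write down for the simplicial complex (as opposed to the metric thickening studied here): a simplex is included whenever its vertex set has diameter at most the scale parameter. The sketch below builds the complex on the vertices of a regular 12-gon for a few scales; the point set, scales, and dimension cap are arbitrary choices for illustration.

```python
import numpy as np
from itertools import combinations

# n points spaced evenly on the unit circle
n = 12
pts = np.column_stack([np.cos(2 * np.pi * np.arange(n) / n),
                       np.sin(2 * np.pi * np.arange(n) / n)])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def vietoris_rips(dist, r, max_dim=2):
    """All simplices of dimension <= max_dim whose vertex set has diameter <= r."""
    n = dist.shape[0]
    simplices = [(i,) for i in range(n)]
    for d in range(1, max_dim + 1):
        for s in combinations(range(n), d + 1):
            if max(dist[i][j] for i, j in combinations(s, 2)) <= r:
                simplices.append(s)
    return simplices

# at small scales the complex is a discrete set, at intermediate scales a 12-cycle
for r in (0.3, 0.6, 1.2):
    S = vietoris_rips(dist, r)
    counts = [sum(1 for s in S if len(s) == k + 1) for k in range(3)]
    print(f"r = {r}: {counts[0]} vertices, {counts[1]} edges, {counts[2]} triangles")
```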
  • Item (Open Access)
    Relative oriented class groups of quadratic extensions
    (Colorado State University. Libraries, 2024) O'Connor, Kelly A., author; Pries, Rachel, advisor; Achter, Jeffrey, committee member; Shoemaker, Mark, committee member; Rugenstein, Maria, committee member
    In 2018 Zemková defined relative oriented class groups associated to quadratic extensions of number fields L/K, extending work of Bhargava concerning composition laws for binary quadratic forms over number fields of higher degree. This work generalized the classical correspondence between ideal classes of quadratic orders and classes of integral binary quadratic forms to any base number field of narrow class number 1. Zemková explicitly computed these relative oriented class groups for quadratic extensions of the rationals. We consider extended versions of this work and develop general strategies to compute relative oriented class groups for quadratic extensions of higher degree number fields by way of the action of Gal(K/Q) on the set of real embeddings of K. We also investigate the binary quadratic forms side of Zemková's bijection and determine conditions for representability of elements of K. Another project comprising work done jointly with Lian Duan, Ning Ma, and Xiyuan Wang is included in this thesis. Our project investigates a principal version of the Chebotarev density theorem, a famous theorem in algebraic number theory which describes the splitting of primes in number field extensions. We provide an overview of the formulation of the principal density and describe its connection to the splitting behavior of the Hilbert exact sequence.
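    The classical correspondence this work generalizes, between ideal classes of quadratic orders and classes of integral binary quadratic forms, can be computed directly for negative discriminants by enumerating reduced forms. The sketch below does exactly that; it is standard textbook material, not Zemková's relative oriented class groups or the relative setting of the thesis, and the discriminants in the loop are arbitrary examples.

```python
def reduced_forms(D):
    """Reduced positive definite forms ax^2 + bxy + cy^2 with b^2 - 4ac = D < 0:
    |b| <= a <= c, and b >= 0 whenever |b| == a or a == c."""
    assert D < 0 and D % 4 in (0, 1)
    forms = []
    a = 1
    while 3 * a * a <= -D:                 # reduced forms satisfy a <= sqrt(|D|/3)
        for b in range(-a, a + 1):
            if (b - D) % 2:                # b must have the same parity as D
                continue
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue
            if b < 0 and (abs(b) == a or a == c):
                continue
            forms.append((a, b, c))
        a += 1
    return forms

# the number of reduced forms is the form class number, e.g. D = -23 gives 3
for D in (-3, -4, -23, -47, -163):
    print(D, len(reduced_forms(D)), reduced_forms(D))
```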
  • Item (Open Access)
    Counting isogeny classes of Drinfeld modules over finite fields via Frobenius distributions
    (Colorado State University. Libraries, 2024) Bray, Amie M., author; Achter, Jeffrey, advisor; Gillespie, Maria, committee member; Hulpke, Alexander, committee member; Pallickara, Shrideep, committee member; Pries, Rachel, committee member
    Classically, the size of an isogeny class of an elliptic curve -- or more generally, a principally polarized abelian variety -- over a finite field is given by a suitable class number. Gekeler expressed the size of an isogeny class of an elliptic curve over a prime field in terms of a product over all primes of local density functions. These local density functions are what one might expect given a random matrix heuristic. In his proof, Gekeler shows that the product of these factors gives the size of an isogeny class by appealing to class numbers of imaginary quadratic orders. Achter, Altug, Garcia, and Gordon generalized Gekeler's product formula to higher dimensional abelian varieties over prime power fields without the calculation of class numbers. Their proof uses the formula of Langlands and Kottwitz that expresses the size of an isogeny class in terms of adelic orbital integrals. This dissertation focuses on the function field analog of the same problem. Due to Laumon, one can express the size of an isogeny class of Drinfeld modules over finite fields via adelic orbital integrals. Meanwhile, Gekeler proved a product formula for rank two Drinfeld modules using a similar argument to that for elliptic curves. We generalize Gekeler's formula to higher rank Drinfeld modules by the direct comparison of Gekeler-style density functions with orbital integrals.
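    The classical elliptic-curve statement that this thesis carries over to Drinfeld modules can be made concrete with a small experiment: over F_p, elliptic curves are isogenous exactly when they share a trace of Frobenius, so tallying Weierstrass equations y² = x³ + ax + b by trace exhibits the isogeny-class sizes whose distribution the product formula explains. The sketch below counts raw Weierstrass models rather than weighted isomorphism classes, so it is only a rough illustration; the prime p = 53 is an arbitrary choice.

```python
from collections import Counter

p = 53

def legendre(n, p):
    """Quadratic character of n mod p: 0, 1, or -1."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

sizes = Counter()
for a in range(p):
    for b in range(p):
        if (4 * a ** 3 + 27 * b ** 2) % p == 0:     # singular curve, skip
            continue
        points = p + 1 + sum(legendre(x ** 3 + a * x + b, p) for x in range(p))
        sizes[p + 1 - points] += 1                   # group by trace of Frobenius

# traces satisfy |t| <= 2*sqrt(p) (Hasse bound); each trace is one isogeny class
for t in sorted(sizes):
    print(f"trace {t:4d}: {sizes[t]} curves y^2 = x^3 + ax + b")
```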
  • Item (Open Access)
    Compound-Gaussian-regularized inverse problems: theory, algorithms, and neural networks
    (Colorado State University. Libraries, 2024) Lyons, Carter, author; Cheney, Margaret, advisor; Raj, Raghu G., advisor; Azimi, Mahmood, committee member; King, Emily, committee member; Mueller, Jennifer, committee member
    Linear inverse problems are frequently encountered in a variety of applications including compressive sensing, radar, sonar, medical, and tomographic imaging. Model-based and data-driven methods are two prevalent classes of approaches used to solve linear inverse problems. Model-based methods incorporate certain assumptions, such as the image prior distribution, into an iterative estimation algorithm, often, for example, by solving a regularized least squares problem. In contrast, data-driven methods learn the inverse reconstruction mapping directly by training a neural network structure on actual signal and signal measurement pairs. Alternatively, algorithm unrolling, a recent approach to inverse problems, combines model-based and data-driven methods through the implementation of an iterative estimation algorithm as a deep neural network (DNN). This approach offers a vehicle to embed domain-level and algorithmic insights into the design of neural networks such that the network layers are interpretable. The performance, in reconstructed signal quality, of unrolled DNNs often exceeds that of corresponding iterative algorithms and standard DNNs while doing so in a computationally efficient fashion. In this work, we leverage algorithm unrolling to combine a powerful statistical prior, the compound Gaussian (CG) prior, with the powerful representational ability of machine learning and DNN approaches. Specifically, first we construct a novel iterative CG-regularized least squares algorithm for signal reconstruction and provide a computational theory for this algorithm. Second, using algorithm unrolling, the newly developed CG-based least squares iterative algorithm is transformed into an original DNN in a manner to facilitate the learning of the optimization landscape geometry. Third, a generalization on the newly constructed CG regularized least squares iterative algorithm is developed, theoretically analyzed, and unrolled to yield a novel state-of-the-art DNN that provides a partial learning of the prior distribution constrained to the CG class of distributions. Fourth, techniques in statistical learning theory are employed for deriving original generalization error bounds on both unrolled DNNs to substantiate theoretical guarantees of each neural network when estimating signals from linear measurements after training. Finally, ample numerical experimentation is conducted for every new CG-based iterative and DNN approach proposed in this work. Simulation results show our methods outperform previous state-of-the-art iterative signal estimation algorithms and deep-learning-based methods, especially with limited training datasets.
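    Algorithm unrolling itself is easy to sketch: a fixed number of iterations of a regularized least-squares solver becomes the layers of a network whose step sizes, thresholds, and matrices could be made learnable. The toy below uses an l1 (sparsity) prior and plain ISTA in NumPy rather than the compound Gaussian prior and learned layers developed in the dissertation; the problem sizes and the regularization weight lam are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# ground truth: a sparse signal observed through a random linear operator plus noise
n, m, k = 100, 40, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_unrolled(y, A, layers=50, step=None, lam=0.05):
    """A fixed number of ISTA iterations; in an unrolled DNN each 'layer' below
    would carry its own learnable step size / threshold (and possibly matrices)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(layers):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

x_hat = ista_unrolled(y, A)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```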
  • Item (Open Access)
    Properties of the reconstruction algorithm and associated scattering transform for admittivities in the plane
    (Colorado State University. Libraries, 2009) Von Herrmann, Alan, author; Mueller, Jennifer, advisor
    We consider the inverse admittivity problem in dimension two. The focus of this dissertation is to develop some properties of the scattering transform Sγ(k), with γ ∈ W^{1,p}(Ω), and of the exponentially growing solutions to the admittivity equation. We consider the case when the potential matrix is Hermitian, using the definition of the potential matrix from Francini [Inverse Problems, 16, 2000]. These exponentially growing solutions play a role in developing a reconstruction algorithm from the Dirichlet-to-Neumann map of γ. A boundary integral equation is derived relating the Dirichlet-to-Neumann map of γ to the exponentially growing solutions to the admittivity equation.
  • Item (Open Access)
    Radial basis functions for color conversion
    (Colorado State University. Libraries, 2008) Qiao, Yue, author; Kirby, Michael, advisor
    The most difficult and challenging task in color printing is to reduce costs while maintaining superior quality. This dissertation proposes significant enhancements to printer color conversion techniques including accurate nonlinear models that incorporate perceptual color difference metrics, lossless gray component replacement (GCR) transformations, optimized toner saving algorithms and numerical/perceptual based gamut mapping methods. Radial Basis Functions (RBFs) combined with the Lp norm approximation with emphasis on L1, L2, and L∞ were developed for color conversion. The exchange algorithm was employed in the L∞ and L1 approximations with RBFs that satisfy the Haar condition. Both the Barrodale and Phillips (BP) algorithm for solving the dual problem and the Bartels and Conn (BC) algorithm for solving the primal were extended to multidimensional color conversion. A new approach for lossless GCR was achieved by finding one-dimensional color manifolds in the CMYK color space using multidimensional optimization techniques. We proposed objective functions for toner savings, cost savings, etc., with no quality degradation. The color conversion problem with toner/ink limitation was solved via both L1 and L∞ approximation algorithms in the neutral and saturated color regions, respectively. The L1 algorithm was a modified Barrodale and Roberts (BR) primal algorithm with an added constraint, while the L∞ algorithm was developed based on the BP dual algorithm, extending the three-stage algorithm to a four-stage algorithm. A novel gamut mapping algorithm was developed based on the numerical model guided by a perceptual color difference model. The direction of the gamut mapping is not fixed as in other methods. The algorithm transformed connected sets of out-of-gamut colors to connected colors near the boundary of the device gamut. The out-of-gamut colors in a small neighborhood vary continuously and smoothly. Our results indicated that the color conversion quality was significantly improved. The lossless GCR algorithm is accurate and efficient. Both the BP and BC algorithms for solving the toner/ink limitation are able to convert colors from CIELab to CMY with any given toner/ink limitation. We foresee that this research will have a significant impact on the color reproduction industry.
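    The core fitting step, building a radial basis function model of a map between three-dimensional color spaces from scattered samples, can be sketched in a few lines. The example below is a plain Gaussian-RBF least-squares (L2) fit in NumPy; the synthetic "device" map, the number of centers, and the width sigma are made-up stand-ins, and none of the L1/L∞ exchange-algorithm machinery or gamut mapping from the thesis is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# training pairs for a device characterization: inputs in one 3-D color space,
# outputs in another (a made-up smooth nonlinear map stands in for measured data)
X = rng.uniform(0, 1, size=(300, 3))
def device_map(X):
    return np.column_stack([X[:, 0] ** 0.9,
                            0.8 * X[:, 1] + 0.2 * X[:, 0] * X[:, 2],
                            np.sqrt(X[:, 2])])
Y = device_map(X)

centers = X[rng.choice(len(X), 60, replace=False)]   # RBF centers chosen from the data
sigma = 0.25

def gaussian_design(X, centers, sigma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = gaussian_design(X, centers, sigma)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)          # L2 (least squares) fit of the weights

# evaluate the fitted conversion on held-out points
X_test = rng.uniform(0, 1, size=(100, 3))
err = np.abs(gaussian_design(X_test, centers, sigma) @ W - device_map(X_test))
print("max absolute conversion error:", err.max())
```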
  • Item (Unknown)
    Mathematical methods for fluid-solid interfaces: meandering streams and sand ripples
    (Colorado State University. Libraries, 2008) Mertens, Keith, author; Putkaradze, Vakhtang, advisor
    This thesis presents several mathematical methods for modeling free surfaces, interfaces, and fluid-solid interactions. This work is done in the context of two physical systems. In the first two sections, the focus will be to understand the physics of streams flowing down inclined substrates. Models will be derived to investigate both steady state and dynamic meandering profiles. It will be shown that, through the right approximation techniques, many physical insights can be drawn about this system. These results include a complete understanding of the steady states, transitions between steady states, the mechanism of meandering, the forces involved in meandering, and spectral scaling laws of long-time ensemble-averaged meandering stream profiles. In the third section, the focus will shift to how one can model underlying physics when it becomes too complicated to address from first principles. Here, the power of symmetries and conservation laws is explored to derive an amplitude equation describing the interface between sand and water when the water is subjected to oscillatory flow. The thesis will then close by posing a novel way to study scaling laws with respect to parameters using Lie's prolongation algorithm. Throughout this work, various tools from physics, engineering, and applied and pure mathematics will be combined to develop approaches for reducing complex systems into tractable pieces that can be studied carefully.
  • Item (Unknown)
    Large-scale computational analysis of National Animal Identification System mock data, including traceback and trace forward
    (Colorado State University. Libraries, 2008) Ladd, Joshua, author; Burns, Patrick J., advisor
    Cattle production is the single largest segment of U.S. agriculture. Animal disease, whether a single incident or a full-scale outbreak, can result in significantly restricted access to both foreign and domestic markets. Regaining consumer confidence is difficult. If a disease cannot be traced back to a common source, then only time can tell whether or not eradication and containment efforts have been successful. Simply "waiting it out" can result in long-term economic losses on a national scale, especially when diseases that are prone to epizootic outbreaks or that have long incubation periods are involved. The United States Department of Agriculture (USDA) maintains that traceability is the key to protecting animal health and marketability. The National Animal Identification System (NAIS) is a voluntary disease traceability framework released by the USDA. Many of the efforts surrounding the development of the NAIS have encompassed the identification of livestock production and handling premises as well as individuals or herds of animals, whereas little effort has been directed toward the ultimate goal of animal traceback within 48 hours. In this dissertation, computational science is applied to the problem of animal disease traceability. In particular, a computational model is developed for the purpose of conducting large-scale traceability simulations. The model consists of two components. The first is a parallel Monte Carlo discrete-event simulator capable of generating large, NAIS-compliant mock datasets representative of the processing requirements of actual NAIS data. The second is a large-scale, parallel disease tracing algorithm that is mapped onto an SMP supercomputer, where high performance is achieved by adopting a hybrid parallel programming model that mixes a shared-memory multi-threading model (OpenMP) with a distributed-memory message-passing model (MPI). The objectives of this dissertation are to characterize the computational requirements of the NAIS, to identify computational platforms and programming paradigms well suited to this effort, and to identify and address computational performance bottlenecks associated with large-scale tracing algorithms.
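    The trace-back operation at the heart of such a system reduces to a reachability search backward in time over animal movement records. The toy below (plain Python, serial; the premises names and records are invented) illustrates the idea, in contrast to the parallel MPI/OpenMP implementation the dissertation describes.

```python
from collections import defaultdict

# movement records: (animal_id, from_premises, to_premises, day)
records = [
    ("cow1", "ranchA", "auctionX", 3), ("cow1", "auctionX", "feedlotY", 5),
    ("cow2", "ranchB", "auctionX", 4), ("cow2", "auctionX", "feedlotZ", 6),
    ("cow3", "ranchA", "feedlotY", 7),
]

def trace_back(records, premises, day):
    """All premises that could have contributed animals to `premises` on or before `day`."""
    arrivals = defaultdict(list)                  # destination -> list of (source, day)
    for _, src, dst, d in records:
        arrivals[dst].append((src, d))
    seen, frontier = set(), [(premises, day)]
    while frontier:
        here, t = frontier.pop()
        for src, d in arrivals[here]:
            if d <= t and src not in seen:
                seen.add(src)
                frontier.append((src, d))
    return seen

# a disease detected at feedlotY on day 8 traces back through the auction to both ranches
print(trace_back(records, "feedlotY", 8))        # {'auctionX', 'ranchA', 'ranchB'}
```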
  • Item (Unknown)
    An adaptive algorithm for an elliptic optimization problem, and stochastic-deterministic coupling: a mathematical framework
    (Colorado State University. Libraries, 2008) Lee, Sheldon, author; Estep, Donald, advisor; Tavener, Simon, advisor
    This dissertation consists of two parts. In the first part, we study optimization of a quantity of interest of a solution of an elliptic problem, with respect to parameters in the data using a gradient search algorithm. We use the generalized Green's function as an efficient way to compute the gradient. We analyze the effect of numerical error on a gradient search, and develop an efficient way to control these errors using a posteriori error analysis. Specifically, we devise an adaptive algorithm to refine and unrefine the finite element mesh at each step in the descent search algorithm. We give basic examples and apply this technique to a model of a healing wound. In the second part, we construct a mathematical framework for coupling atomistic models with continuum models. We first study the case of coupling two deterministic diffusive regions with a common interface. We construct a fixed point map by repeatedly solving the problems, while passing the flux in one direction and the concentration in the other direction. We examine criteria for the fixed point iteration to converge, and offer remedies such as reversing the direction of the coupling, or relaxation, when it does not converge. We then study the one dimensional case where the particles undergo a random walk on a lattice, next to a continuum region. As the atomistic region is random, this technique yields a fixed point iteration of distributions. We run numerical tests to study the long term behavior of such an iteration, and compare the results with the deterministic case. We also discuss a probability transition matrix approach, in which we assume that the boundary conditions at each iteration follow a Markov chain.
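    The deterministic coupling in part two can be illustrated with a one-dimensional caricature: two steady diffusive regions meet at an interface, one subdomain solve receives a concentration and returns a flux, the other receives the flux and returns a concentration, and the exchange is iterated with optional relaxation. The diffusivities, interface location, and relaxation factor below are arbitrary choices, not values from the dissertation.

```python
# Two steady diffusion regions on [0, h] and [h, 1] with u(0) = 1, u(1) = 0 and
# diffusivities D1, D2; each subdomain solution is linear, so a "solve" is one formula.
# Flux continuity at the interface gives the exact value c* = D1*(1-h) / (D1*(1-h) + D2*h).
D1, D2, h = 1.0, 3.0, 0.5

def solve_left(c):
    """Left region with u(0) = 1 and interface concentration c; return the flux -D1*u' at x = h."""
    return -D1 * (c - 1.0) / h

def solve_right(flux):
    """Right region with the given flux at x = h and u(1) = 0; return u(h)."""
    return flux * (1.0 - h) / D2

c, omega = 0.0, 0.5              # initial interface guess and relaxation factor
for it in range(100):
    c_new = solve_right(solve_left(c))
    if abs(c_new - c) < 1e-12:
        break
    # relaxation; the plain iteration (omega = 1) diverges when D1*(1-h) > D2*h
    c = (1 - omega) * c + omega * c_new

print(it, "iterations, interface value", c, "exact", D1 * (1 - h) / (D1 * (1 - h) + D2 * h))
```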
  • Item (Open Access)
    Modeling spatio-temporal systems with skew radial basis functions: theory, algorithms and applications
    (Colorado State University. Libraries, 2008) Jamshidi, Arthur (Arta) Amir, author; Kirby, Michael, advisor
    The discovery of knowledge in large data sets can often be formulated as a problem in nonlinear function approximation. The inherent challenge in such an approach is that the data is often high dimensional, scattered and sparse. Given a limited number of exemplars, one would like to construct models that can generalize to new regions or events. Additionally, underlying physical processes may not be stationary and the nature of the nonlinear relationships may evolve. Ideally, a good model would be adaptive and remain valid over extended regions in space and time. In this work we propose a new Radial Basis Function (RBF) algorithm for constructing nonlinear models from high-dimensional scattered data. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by one or more statistical hypothesis tests that reveal geometric structure in the data when they fail. At each step the added function is fit to data contained in a spatio-temporally defined local region to determine the parameters, in particular, the scale of the local model. Unlike prior techniques for nonlinear function fitting over scattered data, the proposed method requires no ad hoc parameters and it behaves effectively like a black box. Thus, the number of basis functions required for an accurate fit is determined automatically by the algorithm. An extension of the algorithms to the multivariate case, i.e., where the dimension of the range of the mapping is greater than or equal to two, is also carried out. This approach produces more parsimonious models by exploiting the correlation among the various range dimensions. The convergence properties of the algorithms are shown from different perspectives. To further enhance the order and conditioning of the models, we introduce several new compactly supported RBFs for approximating functions in L^p(R^d) via over-determined least squares. We also propose a skew-radial basis function expansion for the empirical model fitting problem to achieve more accuracy and lower model orders. This is accomplished by modulating, or skewing, each RBF by an asymmetric shape function, which increases the number of degrees of freedom available to fit the data. We show that if the original RBF interpolation problem is positive definite, then so is the skew-radial basis function when it is viewed as a bounded perturbation of the RBF. We illustrate the utility of the theoretic and algorithmic innovations via several applications, including modeling data on manifolds, prediction of financial and chaotic time series, and prediction of the maximum wind intensity of a hurricane. In addition, the skew-radial basis functions are shown to provide good approximations to data with jumps. While the algorithms presented here are in the context of RBFs, in principle they can be employed with other methods for function approximation such as multi-layer perceptrons.
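    The incremental flavor of the proposed RBF algorithm, adding one basis function at a time where the current model is worst and refitting, can be caricatured in a few lines. The sketch below uses a fixed Gaussian width and places centers at the maximum residual; the actual algorithm instead drives placement and local scale selection with statistical hypothesis tests and supports skew RBFs, none of which is reproduced here. The test function and all numerical settings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# scattered 2-D data from an "unknown" nonlinear function
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + 0.01 * rng.normal(size=len(X))

def phi(X, center, scale):
    return np.exp(-((X - center) ** 2).sum(-1) / (2 * scale ** 2))

centers, scales = [], []
model = np.zeros(len(X))
for step in range(25):
    resid = y - model
    i = np.argmax(np.abs(resid))                  # place the next RBF at the worst point
    centers.append(X[i])
    scales.append(0.4)                            # fixed scale; the real algorithm fits a local scale
    Phi = np.column_stack([phi(X, c, s) for c, s in zip(centers, scales)])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # refit all weights after each addition
    model = Phi @ w
    print(step + 1, "basis functions, RMS error", np.sqrt(np.mean((y - model) ** 2)))
```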
  • Item (Open Access)
    Automorphism towers of general linear groups
    (Colorado State University. Libraries, 2008) Jónsdóttir, Margrét Sóley, author; Hulpke, Alexander, advisor
    Let G0 be a group, G1 the automorphism group of G0, G2 the automorphism group of G1, and so on. The sequence of these groups, together with the natural homomorphisms πi,i+1 : Gi → Gi+1 which take each element to the inner automorphism it induces, is called the automorphism tower of G0. If πi,i+1 is an isomorphism for some i, then the automorphism tower of G0 is said to terminate. For a given group it is in general not easy to say whether its automorphism tower terminates. Wielandt showed in 1939 that if G is finite with a trivial center then the automorphism tower of G will terminate in a finite number of steps. Since then, some sporadic examples of automorphism towers of finite groups have been described but no general results have been proven. In this thesis we study automorphism towers of finite groups with a non-trivial center. We look at the two extremes: (1) Groups which are center-rich. (2) Groups which have a small but non-trivial center. We show that when looking for an infinite family of groups with terminating automorphism towers the first case is infeasible. We then turn our attention to the latter case, specifically general linear groups of dimension at least two. In odd characteristic, GL(2, q) is not a split extension of its center. The first thing we do is to calculate the automorphism group of GL(2, q) for odd prime powers q. We provide explicit generators and describe the structure of Aut(GL(2, q)) in terms of well-known groups. In this case, the first automorphism group in the tower is a subdirect product of two characteristic factors. This structure is propagated through the tower and we use it to reduce the problem to studying subgroups of automorphism groups of smaller groups. We then use this structure to compute examples of automorphism towers of GL(2, q).
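    The terminating case of an automorphism tower can be checked by brute force for very small groups: a finite group with trivial center whose automorphisms are all inner (a complete group) satisfies G ≅ Aut(G) via conjugation, so its tower stops immediately. The sketch below verifies this for S3 from its Cayley table; it is a toy illustration of the definitions, far from the GL(2, q) computations carried out in the thesis.

```python
from itertools import permutations

# S3 realized as the permutations of {0, 1, 2}; element 0 is the identity
elems = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(elems)}
def mul(i, j):                                 # composition: apply j, then i
    a, b = elems[i], elems[j]
    return idx[tuple(a[b[k]] for k in range(3))]

n = len(elems)
table = [[mul(i, j) for j in range(n)] for i in range(n)]
def inverse(i):
    return next(j for j in range(n) if table[i][j] == 0)

# brute-force Aut(G): bijections fixing the identity that preserve the Cayley table
auts = []
for perm in permutations(range(1, n)):
    f = (0,) + perm
    if all(f[table[i][j]] == table[f[i]][f[j]] for i in range(n) for j in range(n)):
        auts.append(f)

# inner automorphisms: conjugation x -> g x g^{-1} by each group element g
inner = {tuple(table[table[g][x]][inverse(g)] for x in range(n)) for g in range(n)}
center = [g for g in range(n) if all(table[g][x] == table[x][g] for x in range(n))]

print("|G| =", n, "|Z(G)| =", len(center), "|Aut(G)| =", len(auts), "|Inn(G)| =", len(inner))
# trivial center and Aut(G) = Inn(G): the map G -> Aut(G) is an isomorphism,
# so the automorphism tower of S3 terminates at the first step
```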
  • Item (Open Access)
    A ratio ergodic theorem on Borel actions of Z^d and R^d
    (Colorado State University. Libraries, 2009) Holt, Eric Norman, author; Rudolph, Daniel, advisor
    We prove a ratio ergodic theorem for free Borel actions of Z^d and R^d on a standard Borel probability space. The proof employs an extension of the Besicovitch Covering Lemma, as well as a notion of coarse dimension that originates in an upcoming paper of Hochman. Due to possible singularity of the measure, we cannot use functional analytic arguments and therefore diffuse the measure onto the orbits of the action. This diffused measure is denoted μx, and our averages are of the form (1/μx(Bn)) ∫_{Bn} f ∘ T−v(x) dμx. A Følner condition on the orbits of the action is shown, which is the main tool used in the proof of the ergodic theorem. Also, an extension of a known example of divergence of a ratio average is presented for which the action is both conservative and free.
  • Item (Open Access)
    Characteristics of certain families of random graphs
    (Colorado State University. Libraries, 2009) Hampson, Christian Paul, author; Achter, Jeff, advisor
    Many random network models can be expressed as the product of the probability spaces of the individual edges. In these cases, the model can be expressed using a matrix of the probabilities of each edge. I then analyze these models using their respective probability matrices. The degree distribution and the larger eigenvalues are among the attributes whose values can be bounded by examining the same attributes of the probability matrix. I also bound the difference between the eigenvalues of the adjacency matrix of a member of a random graph model and the eigenvalues of the probability matrix for the model. In addition, I find the neighborhood expansion properties for three separate edge-product models.
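    Such edge-product models are easy to experiment with: fix a matrix P of edge probabilities, sample each edge independently, and compare spectral quantities of the sampled adjacency matrix with those of P itself. The two-block probability matrix below is an arbitrary example, not one of the three models analyzed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# edge-probability matrix for a 2-block model: each edge present independently
n, blocks = 200, np.repeat([0, 1], 100)
P = np.where(blocks[:, None] == blocks[None, :], 0.30, 0.05)
np.fill_diagonal(P, 0.0)

# sample a symmetric adjacency matrix from the product measure defined by P
U = rng.uniform(size=(n, n))
A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
A = A + A.T

eig_P = np.sort(np.linalg.eigvalsh(P))[::-1]
eig_A = np.sort(np.linalg.eigvalsh(A))[::-1]
print("top eigenvalues of P:", np.round(eig_P[:3], 2))
print("top eigenvalues of A:", np.round(eig_A[:3], 2))
print("expected vs. realized average degree:", P.sum(1).mean(), A.sum(1).mean())
```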
  • Item (Open Access)
    Ramsey regions and simplicial homology tables for graphs
    (Colorado State University. Libraries, 2008) Frederick, Christopher Austin, author; Peterson, Chris, advisor
    Ramsey theory is the investigation of graphs that force a monochromatic subgraph in every edge-coloring. We devise a way of breaking certain Ramsey theory problems into "smaller" pieces so that information about Ramsey theory can be gained without solving the entire problem, which is often difficult. Next, the work on Ramsey regions for graphs is translated into the language of hypergraphs. Theorems and techniques are reworked to fit appropriately into the setting of hypergraphs. The work on persistence complexes for large data sets is examined in the setting of graphs. Various simplicial complexes can be assigned to a graph. For a given simplicial complex, the persistence complex can be constructed, giving a highly detailed graph invariant. Connections between the graph and its persistence complex are investigated.
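    The prototypical forcing statement behind this kind of question is R(3,3) = 6: every red/blue coloring of the edges of K6 contains a monochromatic triangle, while K5 admits a coloring with none. A brute-force check of both facts (standard material, not a construction from the thesis) fits in a few lines.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j), i < j, to color 0 or 1."""
    return any(coloring[(a, b)] == coloring[(b, c)] == coloring[(a, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_forced(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

print("K5 forces a monochromatic triangle:", every_coloring_forced(5))   # False
print("K6 forces a monochromatic triangle:", every_coloring_forced(6))   # True
```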
  • Item (Open Access)
    Signal fraction analysis for subspace processing of high dimensional data
    (Colorado State University. Libraries, 2007) Emdad, Fatemeh, author; Kirby, Michael, advisor
    A general tool for computing subspaces that decomposes data into potentially useful features is proposed. The technique is called Signal Fraction Analysis (SFA). The row-energy and column-energy optimization problems for signal-to-signal ratios are investigated. A generalized singular value problem is presented. This setting is distinguished from the Singular Value Decomposition (SVD). Preprocessing mappings of the data are used in situations where domain specific knowledge is available as a guide. We suggest an optimization problem where these mapping functions may be adapted using a problem dependent objective function. These ideas are illustrated using Wavelet and Fourier filters applied to EEG data. A self-contained description of the motivating maximum noise fraction method is included and a procedure for estimating the covariance matrix of the noise is described. We extend SFA by introducing novel constraints and propose two new generalized SVD type problems for computing subspace representations. A connection between SFA and Canonical Correlation Analysis is maintained. We implement and investigate a nonlinear extension to SFA based on a kernel method, i.e., Kernel SFA. Moreover, a second algorithm that uses noise adjustment in the data domain prior to kernelization is suggested. We include a detailed derivation of the methodology using kernel principal component analysis as a prototype. These methods are compared using toy examples and the benefits of KSFA are illustrated. This work establishes the potential of an SFA beamforming technique via its merger with a wide band MC-CDMA system. The details of a non-overlapping window adaptive realization of SFA are introduced. We discuss the relationship between SFA and DOA estimation via MUSIC. A novel structure for wide band MC-CDMA systems that utilizes the benefits of path diversity (inherent in direct sequence CDMA) and frequency diversity (inherent in MC-CDMA systems) is introduced. Simulations were performed to study the impact of noise perturbations on the performance of SFA. Simulations confirm that SFA enhances the performance and separability of interfering users. KSFA is applied to the classification of EEG data arising in the Brain Computer Interface Problem. We use Fourier and Wavelet filters to generate signal fractions as well as differencing methods.
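    The signal-to-signal (or signal-to-noise) ratio optimizations behind SFA lead to generalized symmetric eigenvalue problems. A minimal stand-in, shown below, estimates a noise covariance from first differences (in the spirit of the maximum noise fraction method the abstract mentions) and maximizes the Rayleigh quotient wᵀSw / wᵀNw with SciPy's generalized eigensolver; the synthetic multichannel data are invented, and none of the kernel or beamforming extensions are reproduced.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# synthetic multichannel data: a low-dimensional "signal" buried in correlated noise
T, channels = 2000, 8
t = np.arange(T)
signal = np.outer(np.sin(2 * np.pi * t / 50), rng.normal(size=channels))
noise = rng.normal(size=(T, channels)) @ rng.normal(size=(channels, channels)) * 0.5
X = signal + noise

# crude noise-covariance estimate from first differences
S = np.cov(X, rowvar=False)
N = np.cov(np.diff(X, axis=0), rowvar=False)

# generalized symmetric eigenproblem S w = lambda N w; the top eigenvector maximizes
# the ratio w^T S w / w^T N w, i.e. the fraction of energy captured relative to noise
vals, vecs = eigh(S, N)
w = vecs[:, -1]
print("largest generalized eigenvalue (max ratio):", vals[-1])
print("projection of the data onto w has shape", (X @ w).shape)
```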
  • Item (Open Access)
    Toward a type B(n) geometric Littlewood-Richardson Rule
    (Colorado State University. Libraries, 2007) Davis, Diane E., author; Kley, Holger, advisor
    We conjecture a geometric Littlewood-Richardson Rule for the maximal orthogonal Grassmannian and make significant advances in the proof of this conjecture. We consider Schubert calculus in the presence of a nondegenerate symmetric bilinear form on an odd-dimensional vector space (the type Bn setting) and use degenerations to understand intersections of Schubert varieties in the odd orthogonal Grassmannian. We describe the degenerations using combinatorial objects called checker games. This work is closely related to Vakil's Geometric Littlewood-Richardson Rule (Annals of Mathematics, 164).
  • Item (Open Access)
    Numerical solutions of nonlinear systems derived from semilinear elliptic equations
    (Colorado State University. Libraries, 2007) Cruceanu, Stefan-Gicu, author; Allgower, Eugene, advisor; Tavener, Simon, advisor
    The existence and the number of solutions for N-dimensional nonlinear boundary value problems have been studied from a theoretical point of view, but there is no general result that states how many solutions such a problem has, or even whether a solution exists. Numerical approximation of all solutions (complex and real) of systems of polynomials can be performed using numerical continuation methods. In this thesis, we adapt numerical continuation methods to compute all solutions of finite difference discretizations of boundary value problems in two dimensions involving the Laplacian. Using a homotopy deformation, new solutions on finer meshes are obtained from solutions on coarser meshes. The issue that we have to deal with is that the number of solutions of the complex polynomial systems grows with the number of mesh points of the discretization. Hence, filters become necessary in this process. We remark that in May 2005, E. Allgower, D. Bates, A. Sommese, and C. Wampler used a similar strategy in [1] for finding all the solutions of two-point boundary value problems in one dimension with polynomial nonlinearities on the right-hand side. Using exclusion algorithms, we were able to handle general nonlinearities. When tracking solution sets of complex polynomial systems, an issue of bifurcation or near-bifurcation of paths arises. One remedy for this is to use the gamma-trick introduced by Sommese and Wampler in [2]. In this thesis we show that bifurcations necessarily occur at turning points of paths, and we use this fact to handle the bifurcation numerically when the mappings are analytic.
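    The basic continuation step, including the gamma-trick mentioned above, is easy to sketch for a single univariate polynomial: start from the known roots of x^n − 1, deform the start system into the target system, and correct with Newton's method along the way. The sketch below is a bare-bones NumPy version with no adaptive step control or endgame, so it is only illustrative; the example cubic is arbitrary.

```python
import numpy as np

def homotopy_roots(coeffs, steps=200, newton_iters=5):
    """Track all roots of the start system x^n - 1 to the roots of the target
    polynomial along H(x, t) = (1 - t) * gamma * (x^n - 1) + t * f(x)."""
    rng = np.random.default_rng(0)
    f = np.array(coeffs, dtype=complex)            # highest-degree coefficient first
    fp = np.polyder(f)
    n = len(f) - 1
    gamma = np.exp(2j * np.pi * rng.random())      # random complex constant ("gamma trick")
    g = np.zeros(n + 1, dtype=complex)
    g[0], g[-1] = 1.0, -1.0                        # start system x^n - 1
    gp = np.polyder(g)
    x = np.exp(2j * np.pi * np.arange(n) / n)      # its known roots: the n-th roots of unity
    for t in np.linspace(0.0, 1.0, steps)[1:]:
        for _ in range(newton_iters):              # Newton corrector at each step in t
            H = (1 - t) * gamma * np.polyval(g, x) + t * np.polyval(f, x)
            dH = (1 - t) * gamma * np.polyval(gp, x) + t * np.polyval(fp, x)
            x = x - H / dH
    return x

c = [1, -6, 11, -6]                                # (x - 1)(x - 2)(x - 3)
print(np.sort_complex(homotopy_roots(c)))          # approximately 1, 2, 3
print(np.sort_complex(np.roots(c)))
```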
  • Item (Open Access)
    Classification on the Grassmannians: theory and applications
    (Colorado State University. Libraries, 2008) Chang, Jen-Mei, author; Kirby, Michael, advisor
    This dissertation consists of four parts. It introduces a novel geometric framework for the general classification problem and presents empirical results obtained from applying the proposed method to some popular classification problems. An analysis of the robustness of the method is provided using matrix perturbation theory, which in turn motivates an optimization problem to improve the robustness of the classifier. Lastly, we illustrate the use of compressed data representations based on the Karcher mean.
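    Classification on a Grassmannian compares sets of samples through the principal angles between the subspaces they span. The sketch below builds an orthonormal basis for each sample set, measures the 2-norm of the principal angles as a distance, and assigns a test set to the nearest class subspace; the ambient dimension, subspace dimension, noise level, and class labels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def span_basis(samples, dim):
    """Orthonormal basis for the top `dim` directions of a set of column samples."""
    U, _, _ = np.linalg.svd(samples, full_matrices=False)
    return U[:, :dim]

def grassmann_distance(Q1, Q2):
    """2-norm of the principal angles between two subspaces with orthonormal bases Q1, Q2."""
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.linalg.norm(np.arccos(np.clip(s, -1.0, 1.0)))

# two classes, each a 3-dimensional subspace of R^20 observed through noisy samples
ambient, dim = 20, 3
true_basis = {c: np.linalg.qr(rng.normal(size=(ambient, dim)))[0] for c in ("A", "B")}
def observe(c, n=10, noise=0.05):
    return true_basis[c] @ rng.normal(size=(dim, n)) + noise * rng.normal(size=(ambient, n))

train = {c: span_basis(observe(c), dim) for c in true_basis}
test = span_basis(observe("B"), dim)

dists = {c: grassmann_distance(test, train[c]) for c in train}
print("distances to class subspaces:", dists, "-> predicted class:", min(dists, key=dists.get))
```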
  • Item (Open Access)
    Computational measure theoretic approach to inverse sensitivity analysis: methods and analysis
    (Colorado State University. Libraries, 2009) Butler, Troy Daniel, author; Estep, Donald, advisor
    We consider the inverse problem of quantifying the uncertainty of inputs to a finite dimensional map, e.g. one determined implicitly by the solution of a nonlinear system, given specified uncertainty in a linear functional of the output of the map. The uncertainty in the output functional might be suggested by experimental error or imposed as part of a sensitivity analysis. We describe this problem probabilistically, so that the uncertainty in the quantity of interest is represented by a random variable with a known distribution, and we assume that the map from the input space to the quantity of interest is smooth. We derive an efficient method for determining the unique solution to the problem of inverting through a many-to-one map by computing set-valued inverses in the input space, which combines a forward sensitivity analysis with the Implicit Function Theorem. We then derive an efficient computational measure theoretic approach to further invert into the entire input space, resulting in an approximate probability measure on the input space.
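    The problem setup can be conveyed with a crude Monte Carlo stand-in: sample the input space under a reference measure, push each sample through the map, weight it by the assumed density of the output quantity of interest, and normalize to obtain an approximate probability measure on inputs. This is not the set-valued, Implicit-Function-Theorem-based construction of the dissertation; the map Q, the output density, and the event below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def Q(x):
    """Hypothetical smooth many-to-one map from R^2 to a scalar quantity of interest."""
    return x[:, 0] ** 2 + x[:, 1]

# assumed uncertainty in the quantity of interest: a normal density on the output
mean, sigma = 1.0, 0.2
output_density = lambda q: np.exp(-0.5 * ((q - mean) / sigma) ** 2)

# sample the input space under a uniform reference measure and weight each sample by
# the output density of its image; normalizing the weights gives an approximate
# probability measure on the input space consistent with the output uncertainty
x = rng.uniform(-1.0, 1.0, size=(200_000, 2))
w = output_density(Q(x))
w /= w.sum()

# approximate probability of an example input event A = {first coordinate > 0}
print("P(A) under the approximate input measure:", w[x[:, 0] > 0].sum())
```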