
Theses and Dissertations



Recent Submissions

Now showing 1 - 20 of 175
  • Item (Open Access)
    Properties of the reconstruction algorithm and associated scattering transform for admittivities in the plane
    (Colorado State University. Libraries, 2009) Von Herrmann, Alan, author; Mueller, Jennifer, advisor
    We consider the inverse admittivity problem in dimension two. The focus of this dissertation is to develop properties of the scattering transform S_γ(k), with γ ∈ W^{1,p}(Ω), and of the exponentially growing solutions to the admittivity equation. We consider the case when the potential matrix is Hermitian, using the definition of the potential matrix from Francini [Inverse Problems, 16, 2000]. These exponentially growing solutions play a role in developing a reconstruction algorithm from the Dirichlet-to-Neumann map of γ. A boundary integral equation is derived relating the Dirichlet-to-Neumann map of γ to the exponentially growing solutions of the admittivity equation.
  • Item (Open Access)
    Radial basis functions for color conversion
    (Colorado State University. Libraries, 2008) Qiao, Yue, author; Kirby, Michael, advisor
    The most difficult and challenging task in color printing is to reduce costs while maintaining superior quality. This dissertation proposes significant enhancements to printer color conversion techniques, including accurate nonlinear models that incorporate perceptual color difference metrics, lossless gray component replacement (GCR) transformations, optimized toner-saving algorithms, and numerical/perceptual gamut mapping methods. Radial Basis Functions (RBFs) combined with L^p-norm approximation, with emphasis on L1, L2, and L∞, were developed for color conversion. The exchange algorithm was employed in the L∞ and L1 approximations with RBFs that satisfy the Haar condition. Both the Barrodale and Phillips (BP) algorithm for solving the dual problem and the Bartels and Conn (BC) algorithm for solving the primal were extended to multidimensional color conversion. A new approach for lossless GCR was achieved by finding one-dimensional color manifolds in the CMYK color space using multidimensional optimization techniques. We proposed objective functions for toner savings, cost savings, etc., with no quality degradation. The color conversion with toner/ink limitation problem was solved via L1 and L∞ approximation algorithms in the neutral and saturated color regions, respectively. The L1 algorithm was a modified Barrodale and Roberts (BR) primal algorithm with an added constraint, while the L∞ algorithm was developed from the BP dual algorithm, extending the three-stage algorithm to a four-stage algorithm. A novel gamut mapping algorithm was developed based on the numerical model, guided by a perceptual color difference model. Unlike in other methods, the direction of the gamut mapping is not fixed. The algorithm transforms a connected set of out-of-gamut colors to a connected set of colors near the boundary of the device gamut. The out-of-gamut colors in a small neighborhood vary continuously and smoothly.
Our results indicated that the color conversion quality was significantly improved. The lossless GCR algorithm is accurate and efficient. Both the BP and BC algorithms for solving the toner/ink limitation are able to convert colors from CIELab to CMY under any given toner/ink limitation. We foresee that this research will have a significant impact on the color reproduction industry.
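The RBF machinery above can be sketched in a few lines. This is an illustration only: it uses plain L2 (interpolatory) fitting with Gaussian RBFs on randomly generated stand-in data, not the L1/L∞ exchange algorithms or measured Lab-to-CMY data of the dissertation.

```python
import numpy as np

def rbf_fit(X, Y, eps=5.0):
    """Fit a Gaussian-RBF interpolant mapping points X (n, d) to values Y (n, m)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.linalg.solve(np.exp(-eps * d2), Y)   # one weight vector per output channel
    def interp(Z):
        e2 = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * e2) @ W
    return interp

# hypothetical training pairs standing in for measured Lab -> CMY data
rng = np.random.default_rng(0)
lab = rng.uniform(0.0, 1.0, (20, 3))
cmy = rng.uniform(0.0, 1.0, (20, 3))
to_cmy = rbf_fit(lab, cmy)
print(np.abs(to_cmy(lab) - cmy).max())   # interpolation error at the training nodes
```

In practice this core would be combined with the perceptual difference metrics and toner/ink constraints described above; the sketch shows only the least-squares/interpolation step.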
  • Item (Open Access)
    Mathematical methods for fluid-solid interfaces: meandering streams and sand ripples
    (Colorado State University. Libraries, 2008) Mertens, Keith, author; Putkaradze, Vakhtang, advisor
    This thesis presents several mathematical methods for modeling free surfaces, interfaces, and fluid-solid interactions. This work is done in the context of two physical systems. In the first two sections, the focus will be to understand the physics of streams flowing down inclined substrates. Models will be derived to investigate both steady-state and dynamic meandering profiles. It will be shown that, through the right approximation techniques, many physical insights can be drawn about this system. These results include: a complete understanding of the steady states, transitions between steady states, the mechanism of meandering, the forces involved in meandering, and spectral scaling laws of long-time ensemble-averaged meandering stream profiles. In the third section, the focus will shift to how one can model the underlying physics when it becomes too complicated to address from first principles. Here, the power of symmetries and conservation laws is explored to derive an amplitude equation describing the interface between sand and water when the water is subjected to oscillatory flow. The thesis will then close by posing a novel way to study scaling laws with respect to parameters using Lie's prolongation algorithm. Throughout this work, various tools from physics, engineering, and applied and pure mathematics are combined to develop approaches for reducing complex systems into tractable pieces that can be studied carefully.
  • Item (Open Access)
    Large-scale computational analysis of National Animal Identification System mock data, including traceback and trace forward
    (Colorado State University. Libraries, 2008) Ladd, Joshua, author; Burns, Patrick J., advisor
    Cattle production is the single largest segment of U.S. agriculture. Animal disease, whether a single incident or a full-scale outbreak, can result in significantly restricted access to both foreign and domestic markets. Regaining consumer confidence is difficult. If a disease cannot be traced back to a common source, then only time can tell whether or not eradication and containment efforts have been successful. Simply "waiting it out" can result in long-term economic losses on a national scale, especially when diseases prone to epizootic outbreaks or those with long incubation periods are involved. The United States Department of Agriculture (USDA) maintains that traceability is the key to protecting animal health and marketability. The National Animal Identification System (NAIS) is a voluntary disease traceability framework released by the USDA. Many of the efforts surrounding the development of the NAIS have encompassed the identification of livestock production and handling premises, as well as individuals or herds of animals, whereas little effort has been directed toward the ultimate goal of animal traceback within 48 hours. In this dissertation, computational science is applied to the problem of animal disease traceability. In particular, a computational model is developed for the purpose of conducting large-scale traceability simulations. The model consists of two components: the first is a parallel Monte Carlo discrete-event simulator capable of generating large, NAIS-compliant mock datasets representative of the processing requirements of actual NAIS data. The second component is a large-scale, parallel disease tracing algorithm that is mapped onto an SMP supercomputer, where high performance is achieved by adopting a hybrid parallel programming model that mixes a shared-memory multi-threading model (OpenMP) with a distributed-memory message-passing model (MPI).
The objectives of this dissertation are to characterize the computational requirements of the NAIS, identify computational platforms and programming paradigms well suited to this effort, and to identify and address computational performance bottlenecks associated with large-scale tracing algorithms.
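The tracing component can be illustrated with a serial toy version. The dissertation's implementation is parallel (OpenMP/MPI) and operates on NAIS-scale data; the record layout, animal IDs, and premises names below are hypothetical.

```python
# hypothetical movement records: (day, animal, origin premises, destination premises)
moves = [
    (1, "cow42", "farmA", "marketB"),
    (3, "cow42", "marketB", "feedlotC"),
    (2, "cow77", "farmD", "marketB"),
    (4, "cow77", "marketB", "farmE"),
]

def trace_back(animal, day):
    """Premises the animal occupied on or before `day`, most recent first."""
    hist = sorted(((d, o, dest) for d, a, o, dest in moves
                   if a == animal and d <= day), reverse=True)
    out = []
    for d, o, dest in hist:
        out.extend(p for p in (dest, o) if p not in out)
    return out

def trace_forward(premises):
    """Animals that ever passed through `premises` (potential contacts)."""
    return sorted({a for _, a, o, dest in moves if premises in (o, dest)})

print(trace_back("cow42", 3))    # ['feedlotC', 'marketB', 'farmA']
print(trace_forward("marketB"))  # ['cow42', 'cow77']
```

A production-scale version would partition the records across MPI ranks and thread the per-premises scans, as described above.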
  • Item (Open Access)
    An adaptive algorithm for an elliptic optimization problem, and stochastic-deterministic coupling: a mathematical framework
    (Colorado State University. Libraries, 2008) Lee, Sheldon, author; Estep, Donald, advisor; Tavener, Simon, advisor
    This dissertation consists of two parts. In the first part, we study optimization of a quantity of interest of a solution of an elliptic problem, with respect to parameters in the data, using a gradient search algorithm. We use the generalized Green's function as an efficient way to compute the gradient. We analyze the effect of numerical error on a gradient search, and develop an efficient way to control these errors using a posteriori error analysis. Specifically, we devise an adaptive algorithm to refine and unrefine the finite element mesh at each step in the descent search algorithm. We give basic examples and apply this technique to a model of a healing wound. In the second part, we construct a mathematical framework for coupling atomistic models with continuum models. We first study the case of coupling two deterministic diffusive regions with a common interface. We construct a fixed point map by repeatedly solving the problems, passing the flux in one direction and the concentration in the other. We examine criteria for the fixed point iteration to converge, and offer remedies, such as reversing the direction of the coupling or relaxation, for cases in which it does not. We then study the one-dimensional case in which the particles undergo a random walk on a lattice, next to a continuum region. As the atomistic region is random, this technique yields a fixed point iteration of distributions. We run numerical tests to study the long-term behavior of such an iteration, and compare the results with the deterministic case. We also discuss a probability transition matrix approach, in which we assume that the boundary conditions at each iteration follow a Markov chain.
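The relaxation remedy for a non-convergent coupling iteration can be seen in a scalar toy model. The update map g below is a hypothetical stand-in for the interface exchange of flux and concentration; its derivative has magnitude 1.5, so the undamped iteration diverges while a damped one converges.

```python
def relax_iterate(g, x0, omega, steps=100):
    """Damped fixed-point iteration: x <- (1 - omega) * x + omega * g(x)."""
    x = x0
    for _ in range(steps):
        x = (1 - omega) * x + omega * g(x)
    return x

# toy interface update; |g'| = 1.5 > 1, so the plain iteration cannot converge
g = lambda x: 3.0 - 1.5 * x

print(abs(relax_iterate(g, 0.0, 1.0)) > 1e6)   # True: omega = 1 diverges
print(relax_iterate(g, 0.0, 0.5))              # converges to the fixed point 1.2
```

With omega = 0.5 the damped map has slope -0.25, so the iterates contract to the fixed point x = 1.2; this is the scalar analogue of the relaxation remedy mentioned above.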
  • Item (Open Access)
    Modeling spatio-temporal systems with skew radial basis functions: theory, algorithms and applications
    (Colorado State University. Libraries, 2008) Jamshidi, Arthur (Arta) Amir, author; Kirby, Michael, advisor
    The discovery of knowledge in large data sets can often be formulated as a problem in nonlinear function approximation. The inherent challenge in such an approach is that the data are often high-dimensional, scattered, and sparse. Given a limited number of exemplars, one would like to construct models that can generalize to new regions or events. Additionally, underlying physical processes may not be stationary, and the nature of the nonlinear relationships may evolve. Ideally, a good model would be adaptive and remain valid over extended regions in space and time. In this work we propose a new Radial Basis Function (RBF) algorithm for constructing nonlinear models from high-dimensional scattered data. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by one or more statistical hypothesis tests that reveal geometric structure in the data when they fail. At each step the added function is fit to data contained in a spatio-temporally defined local region to determine the parameters, in particular the scale of the local model. Unlike prior techniques for nonlinear function fitting over scattered data, the proposed method requires no ad hoc parameters and behaves effectively like a black box: the number of basis functions required for an accurate fit is determined automatically by the algorithm. An extension of the algorithms to the multivariate case, i.e., where the dimension of the range of the mapping is greater than or equal to two, is also carried out. This approach produces more parsimonious models by exploiting the correlation among the various range dimensions. The convergence properties of the algorithms are shown from different perspectives. To further enhance the order and conditioning of the models, we introduce several new compactly supported RBFs for approximating functions in L^p(R^d) via over-determined least squares.
We also propose a skew-radial basis function expansion for the empirical model fitting problem to achieve higher accuracy and lower model orders. This is accomplished by modulating, or skewing, each RBF by an asymmetric shape function, which increases the number of degrees of freedom available to fit the data. We show that if the original RBF interpolation problem is positive definite, then so is the skew-radial basis function problem when it is viewed as a bounded perturbation of the RBF. We illustrate the utility of the theoretical and algorithmic innovations via several applications, including modeling data on manifolds, prediction of financial and chaotic time series, and prediction of the maximum wind intensity of a hurricane. In addition, the skew-radial basis functions are shown to provide good approximations to data with jumps. While the algorithms presented here are in the context of RBFs, in principle they can be employed with other methods for function approximation, such as multi-layer perceptrons.
  • Item (Open Access)
    Automorphism towers of general linear groups
    (Colorado State University. Libraries, 2008) Jónsdóttir, Margrét Sóley, author; Hulpke, Alexander, advisor
    Let G_0 be a group, G_1 the automorphism group of G_0, G_2 the automorphism group of G_1, and so on. The sequence of these groups, together with the natural homomorphisms π_{i,i+1} : G_i → G_{i+1} which take each element to the inner automorphism it induces, is called the automorphism tower of G_0. If π_{i,i+1} is an isomorphism for some i, then the automorphism tower of G_0 is said to terminate. For a given group it is in general not easy to say whether its automorphism tower terminates. Wielandt showed in 1939 that if G is finite with a trivial center, then the automorphism tower of G terminates in a finite number of steps. Since then, some sporadic examples of automorphism towers of finite groups have been described, but no general results have been proven. In this thesis we study automorphism towers of finite groups with a non-trivial center. We look at the two extremes: (1) groups which are center-rich; (2) groups which have a small but non-trivial center. We show that, when looking for an infinite family of groups with terminating automorphism towers, the first case is infeasible. We then turn our attention to the latter case, specifically general linear groups of dimension at least two. In odd characteristic, GL(2, q) is not a split extension of the center. We first calculate the automorphism group of GL(2, q) for odd prime powers q. We provide explicit generators and describe the structure of Aut(GL(2, q)) in terms of well-known groups. In this case, the first automorphism group in the tower is a subdirect product of two characteristic factors. This structure is propagated through the tower, and we use it to reduce the problem to studying subgroups of automorphism groups of smaller groups. We then use this structure to compute examples of automorphism towers of GL(2, q).
  • Item (Open Access)
    A ratio ergodic theorem on Borel actions of Z^d and R^d
    (Colorado State University. Libraries, 2009) Holt, Eric Norman, author; Rudolph, Daniel, advisor
    We prove a ratio ergodic theorem for free Borel actions of Z^d and R^d on a standard Borel probability space. The proof employs an extension of the Besicovitch Covering Lemma, as well as a notion of coarse dimension that originates in an upcoming paper of Hochman. Due to possible singularity of the measure, we cannot use functional analytic arguments and therefore diffuse the measure onto the orbits of the action. This diffused measure is denoted μ_x, and our averages are of the form (1/μ_x(B_n)) ∫_{B_n} f ∘ T_{-v}(x) dμ_x. A Følner condition on the orbits of the action is shown, which is the main tool used in the proof of the ergodic theorem. Also, an extension of a known example of divergence of a ratio average is presented, for which the action is both conservative and free.
  • Item (Open Access)
    Characteristics of certain families of random graphs
    (Colorado State University. Libraries, 2009) Hampson, Christian Paul, author; Achter, Jeff, advisor
    Many random network models can be expressed as the product space of the probability spaces of the individual edges. In these cases, the model can be expressed using a matrix of the probabilities of each edge. I then analyze these models using their respective probability matrices. The degree distribution and the larger eigenvalues are among the attributes whose values can be bounded by examining the same attributes of the probability matrix. I also bound the difference between the eigenvalues of the adjacency matrix of a member of a random graph model and the eigenvalues of the probability matrix for the model. In addition, I find the neighborhood expansion properties for three separate edge-product models.
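A minimal sketch of this setup: sample a graph whose edges are independent Bernoulli draws from a probability matrix P, and compare the leading eigenvalues of the adjacency matrix with those of P. The two-block structure and the probabilities here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# probability matrix for a two-block (planted-partition) edge-product model
P = np.full((n, n), 0.05)
P[: n // 2, : n // 2] = P[n // 2 :, n // 2 :] = 0.3
np.fill_diagonal(P, 0)

# sample a graph: each edge {i, j} is an independent Bernoulli(P[i, j]) draw
upper = np.triu(rng.random((n, n)) < P, 1)
A = (upper | upper.T).astype(float)

ev_P = np.linalg.eigvalsh(P)[-2:]   # two largest eigenvalues of P
ev_A = np.linalg.eigvalsh(A)[-2:]   # two largest eigenvalues of the sample
print(ev_P, ev_A)                   # the leading eigenvalues track each other
```

The gap between ev_A and ev_P is of the order of the spectral norm of A - P, which is the kind of bound the abstract describes.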
  • Item (Open Access)
    Ramsey regions and simplicial homology tables for graphs
    (Colorado State University. Libraries, 2008) Frederick, Christopher Austin, author; Peterson, Chris, advisor
    Ramsey Theory is the investigation of edge-colored graphs which force a monochromatic subgraph. We devise a way of breaking certain Ramsey Theory problems into "smaller" pieces so that information about Ramsey Theory can be gained without solving the entire problem (which is often difficult). Next, the work on Ramsey regions for graphs is translated into the language of hypergraphs, and theorems and techniques are reworked to fit appropriately into the setting of hypergraphs. The use of persistence complexes on large data sets is then examined in the setting of graphs. Various simplicial complexes can be assigned to a graph; for a given simplicial complex, the persistence complex can be constructed, giving a highly detailed graph invariant. Connections between a graph and its persistence complex are investigated.
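The flavor of edge-colorings forcing a monochromatic subgraph can be checked by brute force in small cases. The sketch below verifies the classical fact R(3,3) = 6: some 2-coloring of K_5 avoids a monochromatic triangle, but no 2-coloring of K_6 does.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) of K_n to color 0 or 1."""
    return any(coloring[(i, j)] == coloring[(i, k)] == coloring[(j, k)]
               for i, j, k in combinations(range(n), 3))

def every_coloring_forced(n):
    """True iff every 2-coloring of K_n contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

print(every_coloring_forced(5))  # False: K_5 admits a triangle-free 2-coloring
print(every_coloring_forced(6))  # True:  R(3,3) = 6
```

Exhaustive search stops being feasible almost immediately as n grows, which is one motivation for the decomposition into "smaller" pieces described above.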
  • Item (Open Access)
    Signal fraction analysis for subspace processing of high dimensional data
    (Colorado State University. Libraries, 2007) Emdad, Fatemeh, author; Kirby, Michael, advisor
    A general tool for computing subspaces that decompose data into potentially useful features is proposed. The technique is called Signal Fraction Analysis (SFA). The row-energy and column-energy optimization problems for signal-to-signal ratios are investigated. A generalized singular value problem is presented, and this setting is distinguished from the Singular Value Decomposition (SVD). Preprocessing mappings of the data are used in situations where domain-specific knowledge is available as a guide. We suggest an optimization problem in which these mapping functions may be adapted using a problem-dependent objective function. These ideas are illustrated using wavelet and Fourier filters applied to EEG data. A self-contained description of the motivating maximum noise fraction method is included, and a procedure for estimating the covariance matrix of the noise is described. We extend SFA by introducing novel constraints and propose two new generalized-SVD-type problems for computing subspace representations. A connection between SFA and Canonical Correlation Analysis is maintained. We implement and investigate a nonlinear extension to SFA based on a kernel method, i.e., Kernel SFA (KSFA). Moreover, a second algorithm that uses noise adjustment in the data domain prior to kernelization is suggested. We include a detailed derivation of the methodology using kernel principal component analysis as a prototype. These methods are compared using toy examples, and the benefits of KSFA are illustrated. This work establishes the potential of an SFA beamforming technique via its merger with a wideband MC-CDMA system. The details of a non-overlapping-window adaptive realization of SFA are introduced. We discuss the relationship between SFA and DOA estimation via MUSIC. A novel structure for wideband MC-CDMA systems that utilizes the benefits of path diversity (inherent in direct-sequence CDMA) and frequency diversity (inherent in MC-CDMA systems) is introduced.
Simulations were performed to study the impact of noise perturbations on the performance of SFA, and they confirm that SFA enhances the performance and separability of interfering users. KSFA is applied to the classification of EEG data arising in the brain-computer interface problem. We use Fourier and wavelet filters, as well as differencing methods, to generate signal fractions.
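A hedged sketch of the signal-fraction idea, in the spirit of the maximum noise fraction method mentioned above: estimate the noise covariance by first differences and solve a generalized eigenproblem for directions maximizing the signal-to-noise ratio. The data are synthetic, not the dissertation's EEG or MC-CDMA setting.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
# two smooth sources mixed into five channels, plus white sensor noise
S = np.c_[np.sin(2 * np.pi * t), np.cos(3 * np.pi * t)]
X = S @ rng.normal(size=(2, 5)) + 0.3 * rng.normal(size=(2000, 5))

C = np.cov(X.T)                            # total covariance (signal + noise)
Cn = np.cov(np.diff(X, axis=0).T) / 2      # first differences estimate the noise

vals, vecs = eigh(C, Cn)                   # generalized problem  C v = λ Cn v
fractions = X @ vecs[:, ::-1]              # components by decreasing signal fraction
print(vals[::-1][:3])                      # leading ratios stand well above 1
```

Directions with ratio near 1 carry essentially noise; the subspace spanned by the high-ratio directions is the useful feature subspace.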
  • Item (Open Access)
    Toward a type B(n) geometric Littlewood-Richardson Rule
    (Colorado State University. Libraries, 2007) Davis, Diane E., author; Kley, Holger, advisor
    We conjecture a geometric Littlewood-Richardson Rule for the maximal orthogonal Grassmannian and make significant advances in the proof of this conjecture. We consider Schubert calculus in the presence of a nondegenerate symmetric bilinear form on an odd-dimensional vector space (the type Bn setting) and use degenerations to understand intersections of Schubert varieties in the odd orthogonal Grassmannian. We describe the degenerations using combinatorial objects called checker games. This work is closely related to Vakil's Geometric Littlewood-Richardson Rule (Annals of Mathematics, 164).
  • Item (Open Access)
    Numerical solutions of nonlinear systems derived from semilinear elliptic equations
    (Colorado State University. Libraries, 2007) Cruceanu, Stefan-Gicu, author; Allgower, Eugene, advisor; Tavener, Simon, advisor
    The existence and the number of solutions of N-dimensional nonlinear boundary value problems have been studied from a theoretical point of view, but there is no general result that states how many solutions such a problem has, or even whether a solution exists. Numerical approximation of all solutions (complex and real) of systems of polynomials can be performed using numerical continuation methods. In this thesis, we adapt numerical continuation methods to compute all solutions of finite difference discretizations of boundary value problems in two dimensions involving the Laplacian. Using a homotopy deformation, new solutions on finer meshes are obtained from solutions on coarser meshes. The issue we have to deal with is that the number of solutions of the complex polynomial systems grows with the number of mesh points of the discretization; hence, filters become necessary in this process. We remark that in May 2005, E. Allgower, D. Bates, A. Sommese, and C. Wampler used in [1] a similar strategy for finding all solutions of two-point boundary value problems in one dimension with polynomial nonlinearities on the right-hand side. Using exclusion algorithms, we were able to handle general nonlinearities. When tracking solution sets of complex polynomial systems, an issue of bifurcation or near-bifurcation of paths arises. One remedy for this is the gamma trick introduced by Sommese and Wampler in [2]. In this thesis we show that bifurcations necessarily occur at turning points of paths, and we use this fact to numerically handle the bifurcation when the mappings are analytic.
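The continuation idea with the gamma trick can be sketched for a single univariate polynomial: deform a start system with known roots into the target system, correcting with Newton's method along the way. The step count and the choice of γ below are arbitrary.

```python
import numpy as np

p = np.array([1, 0, -2, 1], dtype=complex)   # target system: x^3 - 2x + 1
q = np.array([1, 0, 0, -1], dtype=complex)   # start system:  x^3 - 1, roots known
gamma = np.exp(2j)                           # generic complex constant (gamma trick)

def horner(c, x):
    """Evaluate a polynomial given by coefficients c (highest degree first)."""
    y = 0j
    for a in c:
        y = y * x + a
    return y

def deriv(c):
    """Coefficients of the derivative polynomial."""
    n = len(c) - 1
    return np.array([c[i] * (n - i) for i in range(n)])

dp, dq = deriv(p), deriv(q)

def track(x, steps=400, corrections=8):
    """Follow one root of H(x, t) = (1-t)*gamma*q(x) + t*p(x) from t=0 to t=1."""
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(corrections):             # Newton corrector at each step
            H = (1 - t) * gamma * horner(q, x) + t * horner(p, x)
            dH = (1 - t) * gamma * horner(dq, x) + t * horner(dp, x)
            x -= H / dH
    return x

starts = [np.exp(2j * np.pi * k / 3) for k in range(3)]  # the roots of x^3 - 1
roots = sorted(track(s).real for s in starts)
print(roots)   # should recover the three real roots of x^3 - 2x + 1
```

The random complex γ keeps the paths away from bifurcations for almost every choice, which is exactly the role of the gamma trick cited above.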
  • Item (Open Access)
    Classification on the Grassmannians: theory and applications
    (Colorado State University. Libraries, 2008) Chang, Jen-Mei, author; Kirby, Michael, advisor
    This dissertation consists of four parts. It introduces a novel geometric framework for the general classification problem and presents empirical results obtained from applying the proposed method to some popular classification problems. An analysis of the robustness of the method is provided using matrix perturbation theory, which in turn motivates an optimization problem to improve the robustness of the classifier. Lastly, we illustrate the use of compressed data representations based on the Karcher mean.
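A minimal sketch of classification on a Grassmannian: represent each class by a subspace and label a new observation by its geodesic distance, computed from principal angles, to the class subspaces. The classes and data below are synthetic stand-ins.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)

def basis(M):
    """Orthonormal basis for the column space of M."""
    return np.linalg.qr(M)[0]

# two hypothetical classes, each modeled as a 2-dimensional subspace of R^10
A = basis(rng.normal(size=(10, 2)))
B = basis(rng.normal(size=(10, 2)))

def grassmann_dist(X, Y):
    """Geodesic distance on the Grassmannian: 2-norm of the principal angles."""
    return np.linalg.norm(subspace_angles(X, Y))

# a noisy observation whose span is close to the class-A subspace
sample = basis(A @ np.array([[1.0, 0.3], [0.2, 1.0]]) + 0.05 * rng.normal(size=(10, 2)))
label = min(("A", "B"), key=lambda c: grassmann_dist(sample, {"A": A, "B": B}[c]))
print(label)
```

Working with spans rather than individual vectors is what makes the framework robust to within-class variation, which the perturbation analysis above quantifies.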
  • Item (Open Access)
    Computational measure theoretic approach to inverse sensitivity analysis: methods and analysis
    (Colorado State University. Libraries, 2009) Butler, Troy Daniel, author; Estep, Donald, advisor
    We consider the inverse problem of quantifying the uncertainty of inputs to a finite-dimensional map, e.g., one determined implicitly by the solution of a nonlinear system, given specified uncertainty in a linear functional of the output of the map. The uncertainty in the output functional might be suggested by experimental error or imposed as part of a sensitivity analysis. We describe this problem probabilistically, so that the uncertainty in the quantity of interest is represented by a random variable with a known distribution, and we assume that the map from the input space to the quantity of interest is smooth. We derive an efficient method, combining a forward sensitivity analysis with the Implicit Function Theorem, for determining the unique solution to the problem of inverting through a many-to-one map by computing set-valued inverses in the input space. We then derive an efficient computational measure-theoretic approach to further invert into the entire input space, resulting in an approximate probability measure on the input space.
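The inversion of a specified output distribution into a measure on the input space can be illustrated with a Monte Carlo sketch: weight samples of the input space by the ratio of the specified output density to a binned estimate of the pushforward density. The map Q and both distributions are hypothetical, and this is only one simple discretization of the measure-theoretic idea.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = lambda x: x[:, 0] ** 2 + x[:, 1]      # toy quantity-of-interest map R^2 -> R

# specified uncertainty on the output: N(1, 0.1^2), given by its density
out_pdf = lambda q: np.exp(-((q - 1.0) ** 2) / (2 * 0.1 ** 2)) / (0.1 * np.sqrt(2 * np.pi))

# sample the input space (uniform reference measure on [-1, 1]^2)
X = rng.uniform(-1.0, 1.0, size=(200000, 2))
q = Q(X)

# weight each sample by (specified output density) / (binned pushforward density)
hist, edges = np.histogram(q, bins=50, density=True)
idx = np.clip(np.digitize(q, edges) - 1, 0, len(hist) - 1)
w = out_pdf(q) / np.maximum(hist[idx], 1e-12)
w /= w.sum()                               # approximate inverse probability measure

# consistency check: the inverse measure pushes forward to the specified mean
print((w * q).sum())                       # close to the target mean 1.0
```

Every input sample on the same contour of Q receives the same weight, so the weights single out the set-valued inverses described above.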
  • Item (Open Access)
    Short time analysis of deterministic ODE solutions and the expected value of the corresponding birth-death process
    (Colorado State University. Libraries, 2009) Buzby, Megan H., author; Estep, Donald, advisor
    There is a standard way to construct a discrete birth-death probability model for an evolution system, in which an ODE model of the system is used to define the probabilities governing the evolution of the stochastic model. Given the significant differences in the dynamical behavior of ODE solutions which are inherently smooth, and stochastic models which are subject to random variation, the question naturally arises about the connection between the two models. In particular, we investigate the validity of using a continuum model to define the evolution of a stochastic model.
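The connection can be probed numerically: simulate a linear birth-death process whose rates come from the ODE dn/dt = (b - d)n and compare the sample mean with the ODE solution. The rates, population size, and horizon below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
b, d, n0, T = 1.0, 0.5, 50, 1.0   # birth rate, death rate, initial size, horizon

def gillespie(n, t_end):
    """One sample path of the linear birth-death process N(t)."""
    t = 0.0
    while n > 0:
        t += rng.exponential(1.0 / ((b + d) * n))   # time to the next event
        if t > t_end:
            break
        n += 1 if rng.random() < b / (b + d) else -1
    return n

samples = [gillespie(n0, T) for _ in range(2000)]
ode_mean = n0 * np.exp((b - d) * T)   # continuum model: dn/dt = (b - d) n
print(np.mean(samples), ode_mean)     # the sample mean tracks the ODE solution
```

For this linear process the expected value agrees exactly with the ODE solution; the interesting questions arise when the rates are nonlinear, which is the regime the dissertation investigates.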
  • Item (Open Access)
    Electrical impedance tomography reconstructions in two and three dimensions: from Calderón to direct methods
    (Colorado State University. Libraries, 2009) Bikowski, Jutta, author; Mueller, Jennifer, advisor
    Electrical Impedance Tomography (EIT) uses voltage and current measurements from the boundary to reconstruct the electrical conductivity distribution inside an unknown object. In this dissertation two different EIT reconstruction algorithms are investigated. The first was introduced by A. P. Calderón [ Soc. Bras. de Mat., (1980), pp. 65-73]. His method was implemented and successfully applied to both numerical and experimental data in two dimensions, including a phantom that models a cross section of a human chest and data taken from a human chest.
  • Item (Open Access)
    Two topics in combinatorial optimization: the domino portrait problem and the maximum clique problem
    (Colorado State University. Libraries, 2007) Alshamary, Bader, author; Betten, Anton, advisor
    Combinatorial Optimization plays a significant role in applied mathematics, supplying solutions to many scientific problems in a variety of fields, including computer science and computational networks. This dissertation first reviews a number of problems from combinatorial optimization and the algorithms used to solve them.
  • Item (Open Access)
    Classification algorithms for graphs, digraphs, and linear spaces
    (Colorado State University. Libraries, 2007) Al-Azemi, Abdullah, author; Betten, Anton, advisor
    Combinatorial incidence structures like graphs, digraphs, and linear spaces are defined modulo an isomorphism relation. Typically we are interested in determining complete systems of representatives of the isomorphism classes, in order to test conjectures or to prove existence or non-existence of examples for new theorems.
  • Item (Open Access)
    Expected distances on homogeneous manifolds and notes on pattern formation
    (Colorado State University. Libraries, 2023) Balch, Brenden, author; Shipman, Patrick, advisor; Bradley, Mark, committee member; Shonkwiler, Clay, committee member; Peterson, Chris, committee member; Chen, Hua, committee member
    Flag manifolds are generalizations of projective spaces and other Grassmannians: they parametrize flags, which are nested sequences of subspaces in a given vector space. These are important objects in algebraic and differential geometry, but are also increasingly being used in data science, where many types of data are properly understood as subspaces rather than vectors. In Chapter 1 of this dissertation, we discuss partially oriented flag manifolds, which parametrize flags in which some of the subspaces may be endowed with an orientation. We compute the expected distance between random points on some low-dimensional examples, which we view as a statistical baseline against which to compare the distances between particular partially oriented flags coming from geometry or data. Lens spaces are a family of manifolds that have been a source of many interesting phenomena in topology and differential geometry. Their concrete construction, as quotients of odd-dimensional spheres by a free linear action of a finite cyclic group, allows a deeper analysis of their structure. In Chapter 2, we consider the problem of moments for the distance function between randomly selected pairs of points on homogeneous three-dimensional lens spaces. We give a derivation of a recursion relation for the moments, a formula for the kth moment, and a formula for the moment generating function, as well as an explicit formula for the volume of balls of all radii in these lens spaces. Motivated by previous results showing that the addition of a linear dispersive term to the two-dimensional Kuramoto-Sivashinsky equation has a dramatic effect on the pattern formation, we study the Swift-Hohenberg equation with an added linear dispersive term, the dispersive Swift-Hohenberg equation (DSHE) in Chapter 3. The DSHE produces stripe patterns with spatially extended defects that we call seams. 
A seam is defined to be a dislocation that is smeared out along a line segment that is obliquely oriented relative to an axis of reflectional symmetry. In contrast to the dispersive Kuramoto-Sivashinsky equation, the DSHE has a narrow band of unstable wavelengths close to an instability threshold. This allows for analytical progress to be made. We show that the amplitude equation for the DSHE close to threshold is a special case of the anisotropic complex Ginzburg-Landau equation (ACGLE) and that seams in the DSHE correspond to spiral waves in the ACGLE. Seam defects and the corresponding spiral waves tend to organize themselves into chains, and we obtain formulas for the velocity of the spiral wave cores and for the spacing between them. In the limit of strong dispersion, a perturbative analysis yields a relationship between the amplitude and wavelength of a stripe pattern and its propagation velocity. Numerical integrations of the ACGLE and the DSHE confirm these analytical results. Chapter 4 explores the measurement and characterization of order in non-equilibrium pattern-forming systems. The study focuses on the use of topological measures of order, via persistent homology and the Wasserstein metric. We investigate the quantification of order with respect to ideal lattice patterns and demonstrate the effectiveness of the introduced measures of order in analyzing imperfect three-dimensional patterns and their time evolution. This chapter provides insight into complex pattern formation and contributes to the understanding of order in three dimensions.
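The narrow band of unstable wavelengths near threshold is already visible in the linear growth rate σ(k) = r - (1 - k²)² of the standard (non-dispersive) Swift-Hohenberg equation; the dispersive term studied above modifies, but does not remove, this band. A small sketch, with the value of r chosen arbitrarily:

```python
import numpy as np

r = 0.04                           # small distance above the instability threshold
k = np.linspace(0.0, 2.0, 4001)
sigma = r - (1 - k ** 2) ** 2      # growth rate of the Fourier mode e^{ikx}
band = k[sigma > 0]                # unstable band: k^2 in (1 - sqrt(r), 1 + sqrt(r))
print(band.min(), band.max())      # a narrow band around k = 1
```

As r decreases toward 0, the band shrinks like sqrt(r), which is what permits a weakly nonlinear amplitude-equation description close to threshold.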