Theses and Dissertations
Browsing Theses and Dissertations by Issue Date
Now showing 1 - 20 of 187
Item Open Access: An implicit method for water wave problems (Colorado State University. Libraries, 1983). Aston, Martha B., author; Thomas, J. W. (James William), 1941-, advisor; Zachmann, David W., committee member; Schubert, Wayne H., committee member.

This paper presents an implicit scheme for numerically simulating fluid flow in the presence of a free surface. The scheme couples numerical generation of a boundary-fitted coordinate system with an efficient solution of the finite difference equations. The method solves the two-dimensional Navier-Stokes equations by applying an implicit backward-in-time difference scheme, which is linearized by Taylor series expansion about the known time level to produce a system of linear difference equations. The difference equations are solved by an Alternating-Direction-Implicit procedure, which defines a sequence of one-dimensional block tridiagonal matrix equations. A standard block elimination scheme solves the one-dimensional equations. For each time step, solutions for all equations are calculated simultaneously and noniteratively. Preliminary solutions of free-surface fluid flow in an open channel are presented. These solutions are examined to define initial stability criteria for the numerical scheme.

Item Open Access: Network multiple frame assignment architectures (Colorado State University. Libraries, 2001). Lu, Suihua, author; Poore, Aubrey, advisor; Kirby, Michael J., committee member; Zachmann, David W., committee member; Miranda, Rick, 1953-, committee member.

Multiple target tracking methods divide into two broad classes, namely single frame and multiple frame methods. The most successful of the multiple frame methods are multiple hypothesis tracking (MHT) and multiple frame assignments (MFA). In dense tracking environments the performance improvement of multiple frame methods over single frame methods is very significant, making them the preferred solution for many tracking problems.
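The frame-to-frame data association at the core of these assignment methods is, at its simplest, a measurement-to-track assignment problem. As a hedged toy sketch (not the thesis's algorithm, and with a hypothetical cost matrix), a brute-force search over permutations makes the idea concrete:

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force the measurement-to-track assignment that minimizes
    total cost; fine for the tiny matrices of a toy example."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[track][meas] for track, meas in enumerate(perm))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical 3-track / 3-measurement gating costs.
cost = [
    [2.0, 9.0, 7.0],
    [8.0, 3.0, 6.0],
    [5.0, 8.0, 1.0],
]
print(best_assignment(cost))  # ((0, 1, 2), 6.0): track i gets measurement i
```

Real trackers replace the factorial search with polynomial-time assignment solvers; multiple frame methods extend this to several frames at once, which is what makes the problem hard.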
Thus, in addition to the availability of single frame processing, multiple frame data association methods are an essential class of methods for almost all tracking needs. The application of multiple frame tracking methods must consider an architecture in which the sensors are distributed across multiple platforms. Such geometric and sensor diversity has the potential to significantly enhance tracking and discrimination accuracy. A centralized architecture, in which all measurements are sent to one location and processed with tracks being transmitted back to the different platforms, is a simple one that is probably optimal in that it is capable of producing the best track quality (e.g., purity and accuracy) and a consistent air picture. The centralized tracker is, however, unacceptable for several reasons, notably communication overloads and single-point failure. Thus, one must turn to a distributed architecture for both estimation/fusion and data association. One of the simplest network-centric architectures is that of placing a centralized tracker on each platform. This architecture, called Network MFA Centralized, removes the problem of single-point failure. However, due to communication delays in the network, the order in which measurements arrive at different platforms varies. Each composite tracker makes its own tracking decisions based on the data it receives, regardless of the decisions of other platforms. Therefore, a consistent air picture may not be achieved across the network. Thus, the objective of this thesis is the development of two near-optimal network-centric MFA architectures, namely Network MFA on Local Data and Network Tracks and Network MFA on All Data and Network Tracks, that preserve the quality of a centralized tracker across a network of platforms while managing communication loading and achieving a consistent air picture.
One technique that has proved useful for achieving a single integrated air picture (SIAP) is to require that each platform be in charge of assigning its own measurements to the network tracks. In the architecture of Network MFA on Local Data and Network Tracks, only local data are used in the sliding windows and track initiations are based on local data only. In the architecture of Network MFA on All Data and Network Tracks, all data (remote and local) are used in the sliding window. The architectures address communication loading in that track states and their error covariances are not required to be transmitted back to the various platforms. The results of extensive computations are presented to validate the differences among the four tracking architectures.

Item Open Access: Two topics in combinatorial optimization: the domino portrait problem and the maximum clique problem (Colorado State University. Libraries, 2007). Alshamary, Bader, author; Betten, Anton, advisor.

Combinatorial optimization plays a significant role in applied mathematics, supplying solutions to many scientific problems in a variety of fields, including computer science and computational networks. This dissertation first reviews a number of problems from combinatorial optimization and the algorithms used to solve them.

Item Open Access: Classification algorithms for graphs, digraphs, and linear spaces (Colorado State University. Libraries, 2007). Al-Azemi, Abdullah, author; Betten, Anton, advisor.

Combinatorial incidence structures like graphs, digraphs, and linear spaces are defined modulo an isomorphism relation. Typically we are interested in determining complete systems of representatives of the isomorphism classes, in order to test conjectures or to prove existence or non-existence of examples for new theorems.

Item Open Access: Numerical solutions of nonlinear systems derived from semilinear elliptic equations (Colorado State University. Libraries, 2007). Cruceanu, Stefan-Gicu, author; Allgower, Eugene, advisor; Tavener, Simon, advisor.

The existence and the number of solutions of N-dimensional nonlinear boundary value problems have been studied from a theoretical point of view, but there is no general result that states how many solutions such a problem has, or even whether a solution exists. Numerical approximation of all solutions (complex and real) of systems of polynomials can be performed using numerical continuation methods. In this thesis, we adapt numerical continuation methods to compute all solutions of finite difference discretizations of boundary value problems in two dimensions involving the Laplacian. Using a homotopy deformation, new solutions on finer meshes are obtained from solutions on coarser meshes. The issue we must deal with is that the number of solutions of the complex polynomial systems grows with the number of mesh points of the discretization; hence filters become necessary in this process. We remark that in May 2005, E. Allgower, D. Bates, A. Sommese, and C. Wampler used in [1] a similar strategy for finding all solutions of two-point boundary value problems in one dimension with polynomial nonlinearities on the right-hand side. Using exclusion algorithms, we were able to handle general nonlinearities. When tracking solution sets of complex polynomial systems, an issue of bifurcation or near-bifurcation of paths arises. One remedy is the gamma-trick introduced by Sommese and Wampler in [2]. In this thesis we show that bifurcations necessarily occur at turning points of paths, and we use this fact to numerically handle the bifurcation when the mappings are analytic.

Item Open Access: Toward a type B(n) geometric Littlewood-Richardson Rule (Colorado State University. Libraries, 2007). Davis, Diane E., author; Kley, Holger, advisor.

We conjecture a geometric Littlewood-Richardson rule for the maximal orthogonal Grassmannian and make significant advances in the proof of this conjecture. We consider Schubert calculus in the presence of a nondegenerate symmetric bilinear form on an odd-dimensional vector space (the type B_n setting) and use degenerations to understand intersections of Schubert varieties in the odd orthogonal Grassmannian. We describe the degenerations using combinatorial objects called checker games. This work is closely related to Vakil's geometric Littlewood-Richardson rule (Annals of Mathematics, 164).

Item Open Access: Signal fraction analysis for subspace processing of high dimensional data (Colorado State University. Libraries, 2007). Emdad, Fatemeh, author; Kirby, Michael, advisor.

A general tool for computing subspaces that decompose data into potentially useful features is proposed. The technique is called Signal Fraction Analysis (SFA). The row-energy and column-energy optimization problems for signal-to-signal ratios are investigated, and a generalized singular value problem is presented. This setting is distinguished from the Singular Value Decomposition (SVD). Preprocessing mappings of the data are used in situations where domain-specific knowledge is available as a guide. We suggest an optimization problem in which these mapping functions may be adapted using a problem-dependent objective function. These ideas are illustrated using wavelet and Fourier filters applied to EEG data. A self-contained description of the motivating maximum noise fraction method is included, and a procedure for estimating the covariance matrix of the noise is described. We extend SFA by introducing novel constraints and propose two new generalized-SVD-type problems for computing subspace representations. A connection between SFA and Canonical Correlation Analysis is made.
We implement and investigate a nonlinear extension of SFA based on a kernel method, i.e., kernel SFA (KSFA). Moreover, a second algorithm that applies noise adjustment in the data domain prior to kernelization is suggested. We include a detailed derivation of the methodology using kernel principal component analysis as a prototype. These methods are compared using toy examples, and the benefits of KSFA are illustrated. This work establishes the potential of an SFA beamforming technique via its merger with a wideband MC-CDMA system. The details of a non-overlapping-window adaptive realization of SFA are introduced. We discuss the relationship between SFA and DOA estimation via MUSIC. A novel structure for wideband MC-CDMA systems that utilizes the benefits of path diversity (inherent in direct sequence CDMA) and frequency diversity (inherent in MC-CDMA systems) is introduced. Simulations were performed to study the impact of noise perturbations on the performance of SFA; they confirm that SFA enhances the performance and separability of interfering users. KSFA is applied to the classification of EEG data arising in the brain-computer interface problem. We use Fourier and wavelet filters to generate signal fractions, as well as differencing methods.

Item Open Access: Classification on the Grassmannians: theory and applications (Colorado State University. Libraries, 2008). Chang, Jen-Mei, author; Kirby, Michael, advisor.

This dissertation consists of four parts. It introduces a novel geometric framework for the general classification problem and presents empirical results obtained from applying the proposed method to some popular classification problems. An analysis of the robustness of the method is provided using matrix perturbation theory, which in turn motivates an optimization problem to improve the robustness of the classifier.
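Geometric classification on Grassmannians typically compares subspaces through principal angles. As a hedged toy sketch (not the dissertation's method), for one-dimensional subspaces the smallest principal angle reduces to the angle between spanning vectors, and nearest-subspace classification picks the class with the smallest angle:

```python
import math

def principal_angle_1d(u, v):
    """Principal angle between the lines spanned by vectors u and v.
    For 1-D subspaces this is the angle between unit vectors, folded
    into [0, pi/2] since a line has no orientation."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cos_theta = min(1.0, abs(dot) / (nu * nv))
    return math.acos(cos_theta)

# Hypothetical class representatives; assign the test line to the class
# whose representative makes the smallest principal angle with it.
classes = {"A": [1.0, 0.0, 0.0], "B": [0.0, 1.0, 0.0]}
test = [0.9, 0.1, 0.0]
label = min(classes, key=lambda k: principal_angle_1d(classes[k], test))
print(label)  # "A"
```

For k-dimensional subspaces the principal angles come from the singular values of the product of orthonormal basis matrices, which is where the matrix perturbation analysis mentioned in the abstract enters.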
Lastly, we illustrate the use of compressed data representations based on the Karcher mean.

Item Open Access: An adaptive algorithm for an elliptic optimization problem, and stochastic-deterministic coupling: a mathematical framework (Colorado State University. Libraries, 2008). Lee, Sheldon, author; Estep, Donald, advisor; Tavener, Simon, advisor.

This dissertation consists of two parts. In the first part, we study optimization of a quantity of interest of the solution of an elliptic problem, with respect to parameters in the data, using a gradient search algorithm. We use the generalized Green's function as an efficient way to compute the gradient. We analyze the effect of numerical error on the gradient search and develop an efficient way to control these errors using a posteriori error analysis. Specifically, we devise an adaptive algorithm to refine and unrefine the finite element mesh at each step of the descent search. We give basic examples and apply this technique to a model of a healing wound. In the second part, we construct a mathematical framework for coupling atomistic models with continuum models. We first study the case of coupling two deterministic diffusive regions with a common interface. We construct a fixed point map by repeatedly solving the problems, passing the flux in one direction and the concentration in the other. We examine criteria for the fixed point iteration to converge, and offer remedies, such as reversing the direction of the coupling or relaxation, for the cases in which it does not. We then study the one-dimensional case in which the particles undergo a random walk on a lattice next to a continuum region. As the atomistic region is random, this technique yields a fixed point iteration of distributions. We run numerical tests to study the long-term behavior of such an iteration and compare the results with the deterministic case.
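The deterministic interface coupling described above, passing flux one way and concentration the other with relaxation as a remedy for divergence, can be sketched with scalar toy responses (all function names and values here are hypothetical, not the dissertation's model):

```python
def coupled_fixed_point(f, g, x0, relax=0.5, tol=1e-10, max_iter=1000):
    """Toy interface coupling: region 1 returns a flux f(c) for a given
    interface concentration c, region 2 returns a concentration g(q)
    for a given flux q. Under-relaxation damps the exchange so the
    composed map converges even when plain iteration would oscillate."""
    c = x0
    for _ in range(max_iter):
        q = f(c)            # solve region 1: concentration -> flux
        c_new = g(q)        # solve region 2: flux -> concentration
        c_next = (1 - relax) * c + relax * c_new
        if abs(c_next - c) < tol:
            return c_next
        c = c_next
    raise RuntimeError("coupling iteration did not converge")

# Hypothetical linear responses; the coupled value solves c = g(f(c)).
flux = lambda c: 2.0 - 1.5 * c
conc = lambda q: 0.8 * q
print(coupled_fixed_point(flux, conc, x0=0.0))
```

Here the plain composed map c -> g(f(c)) has slope -1.2 and diverges, while the half-relaxed map has slope -0.1 and converges to c = 1.6/2.2; this is exactly the kind of convergence criterion the abstract refers to.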
We also discuss a probability transition matrix approach, in which we assume that the boundary conditions at each iteration follow a Markov chain.

Item Open Access: Radial basis functions for color conversion (Colorado State University. Libraries, 2008). Qiao, Yue, author; Kirby, Michael, advisor.

The most difficult and challenging task in color printing is to reduce costs while maintaining superior quality. This dissertation proposes significant enhancements to printer color conversion techniques, including accurate nonlinear models that incorporate perceptual color difference metrics, lossless gray component replacement (GCR) transformations, optimized toner saving algorithms, and numerical/perceptual gamut mapping methods. Radial Basis Functions (RBFs) combined with Lp norm approximation, with emphasis on L1, L2, and L∞, were developed for color conversion. The exchange algorithm was employed in the L∞ and L1 approximations with RBFs that satisfy the Haar condition. Both the Barrodale and Phillips (BP) algorithm for solving the dual problem and Bartels and Conn's (BC) algorithm for solving the primal were extended to multidimensional color conversion. A new approach to lossless GCR was achieved by finding one-dimensional color manifolds in the CMYK color space using multidimensional optimization techniques. We proposed objective functions for toner savings, cost savings, etc., with no quality degradation. The color conversion problem with toner/ink limitation was solved via L1 and L∞ approximation algorithms in the neutral and saturated color regions, respectively. The L1 algorithm was a modified Barrodale and Roberts (BR) primal algorithm with an added constraint, while the L∞ algorithm was developed from the BP dual algorithm, extending the three-stage algorithm to a four-stage algorithm. A novel gamut mapping algorithm was developed based on the numerical model guided by a perceptual color difference model.
Unlike other methods, the direction of the gamut mapping is not fixed. The algorithm transformed connected out-of-gamut colors to connected colors around the boundary of the device gamut, so that out-of-gamut colors in a small neighborhood vary continuously and smoothly. Our results indicate that the color conversion quality was significantly improved, and the lossless GCR algorithm is accurate and efficient. Both the BP and BC algorithms for solving the toner/ink limitation are able to convert colors from CIELab to CMY with any given toner/ink limitation. We foresee this research having significant impact on the color reproduction industry.

Item Open Access: Ramsey regions and simplicial homology tables for graphs (Colorado State University. Libraries, 2008). Frederick, Christopher Austin, author; Peterson, Chris, advisor.

Ramsey Theory is the investigation of edge-colored graphs which force a monochromatic subgraph. We devise a way of breaking certain Ramsey Theory problems into "smaller" pieces so that information can be gained without solving the entire problem (which is often difficult). Next, the work with Ramsey regions for graphs is translated into the language of hypergraphs, and theorems and techniques are reworked to fit that setting. The work on persistence complexes for large data sets is examined in the setting of graphs: various simplicial complexes can be assigned to a graph, and for a given simplicial complex the persistence complex can be constructed, giving a highly detailed graph invariant. Connections between the graph and its persistence complex are investigated.

Item Open Access: Mathematical methods for fluid-solid interfaces: meandering streams and sand ripples (Colorado State University. Libraries, 2008). Mertens, Keith, author; Putkaradze, Vakhtang, advisor.

This thesis presents several mathematical methods for modeling free surfaces, interfaces, and fluid-solid interactions.
This work is done in the context of two physical systems. In the first two sections, the focus will be to understand the physics of streams flowing down inclined substrates. Models will be derived to investigate both steady-state and dynamic meandering profiles. It will be shown that, through the right approximation techniques, many physical insights can be drawn about this system. These results include: a complete understanding of the steady states, transitions between steady states, the mechanism of meandering, the forces involved in meandering, and spectral scaling laws of long-time ensemble-averaged meandering stream profiles. In the third section, the focus will shift to how one can model the underlying physics when it becomes too complicated to address from first principles. Here, the power of symmetries and conservation laws is explored to derive an amplitude equation describing the interface between sand and water when the water is subjected to oscillatory flow. The thesis will then close by posing a novel way to study scaling laws with respect to parameters using Lie's prolongation algorithm. Through this work, various tools will be combined from the fields of physics, engineering, and applied and pure mathematics to develop approaches for reducing complex systems into tractable pieces which can be studied carefully.

Item Open Access: Modeling spatio-temporal systems with skew radial basis functions: theory, algorithms and applications (Colorado State University. Libraries, 2008). Jamshidi, Arthur (Arta) Amir, author; Kirby, Michael, advisor.

The discovery of knowledge in large data sets can often be formulated as a problem in nonlinear function approximation. The inherent challenge in such an approach is that the data are often high dimensional, scattered, and sparse. Given a limited number of exemplars, one would like to construct models that can generalize to new regions or events.
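The basic building block behind such models, fitting scattered data with a radial basis function expansion, can be sketched minimally (a toy 1-D Gaussian-RBF interpolant, not the thesis's adaptive algorithm):

```python
import math

def rbf_interpolate(xs, ys, shape=1.0):
    """Fit s(x) = sum_j w_j * exp(-(shape*(x - x_j))^2) through the data
    by solving the square collocation system with Gaussian elimination."""
    n = len(xs)
    phi = lambda r: math.exp(-(shape * r) ** 2)
    # Augmented matrix [Phi | y] for the collocation system Phi w = y.
    A = [[phi(xs[i] - xs[j]) for j in range(n)] + [ys[i]] for i in range(n)]
    for col in range(n):                      # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n + 1):
                A[row][k] -= f * A[col][k]
    w = [0.0] * n
    for row in reversed(range(n)):            # back substitution
        s = A[row][n] - sum(A[row][k] * w[k] for k in range(row + 1, n))
        w[row] = s / A[row][row]
    return lambda x: sum(w[j] * phi(x - xs[j]) for j in range(n))

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, -1.0]
s = rbf_interpolate(xs, ys)
print(round(s(1.0), 6))  # 1.0 — the interpolant reproduces the data at the centers
```

The abstract's algorithm differs in adding basis functions one at a time, driven by hypothesis tests, rather than placing one at every data point.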
Additionally, the underlying physical processes may not be stationary, and the nature of the nonlinear relationships may evolve. Ideally, a good model would be adaptive and remain valid over extended regions in space and time. In this work we propose a new Radial Basis Function (RBF) algorithm for constructing nonlinear models from high-dimensional scattered data. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by one or more statistical hypothesis tests that reveal geometric structure in the data when they fail. At each step, the added function is fit to data contained in a spatio-temporally defined local region to determine its parameters, in particular the scale of the local model. Unlike prior techniques for nonlinear function fitting over scattered data, the proposed method requires no ad hoc parameters and behaves effectively like a black box; the number of basis functions required for an accurate fit is determined automatically by the algorithm. An extension of the algorithms to the multivariate case, i.e., where the dimension of the range of the mapping is greater than or equal to two, is also carried out. This approach produces more parsimonious models by exploiting the correlation among the various range dimensions. The convergence properties of the algorithms are shown from different perspectives. To further enhance the order and conditioning of the models, we introduce several new compactly supported RBFs for approximating functions in L^p(R^d) via over-determined least squares. We also propose a skew-radial basis function expansion for the empirical model fitting problem to achieve more accuracy and lower model orders. This is accomplished by modulating, or skewing, each RBF by an asymmetric shape function, which increases the number of degrees of freedom available to fit the data.
We show that if the original RBF interpolation problem is positive definite, then so is the skew-radial basis function problem when it is viewed as a bounded perturbation of the RBF. We illustrate the utility of the theoretical and algorithmic innovations via several applications, including modeling data on manifolds, prediction of financial and chaotic time series, and prediction of the maximum wind intensity of a hurricane. In addition, the skew-radial basis functions are shown to provide good approximations to data with jumps. While the algorithms presented here are in the context of RBFs, in principle they can be employed with other methods for function approximation, such as multi-layer perceptrons.

Item Open Access: Large-scale computational analysis of National Animal Identification System mock data, including traceback and trace forward (Colorado State University. Libraries, 2008). Ladd, Joshua, author; Burns, Patrick J., advisor.

Cattle production is the single largest segment of U.S. agriculture. Animal disease, whether a single incident or a full-scale outbreak, can result in significantly restricted access to both foreign and domestic markets, and regaining consumer confidence is difficult. If a disease cannot be traced back to a common source, then only time can tell whether eradication and containment efforts have been successful; simply "waiting it out" can result in long-term economic losses on a national scale, especially when diseases prone to epizootic outbreaks or those with long incubation periods are involved. The United States Department of Agriculture (USDA) maintains that traceability is the key to protecting animal health and marketability. The National Animal Identification System (NAIS) is a voluntary disease traceability framework released by the USDA.
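Disease traceback over animal movement records can be pictured as a backward reachability search on a time-stamped contact graph. This toy sketch uses a hypothetical record format, not the NAIS schema:

```python
from collections import defaultdict

def trace_back(movements, infected_premises, infection_time):
    """Walk movement records backward in time from an infected premises,
    collecting every premises that could have been a source.
    movements: list of (time, from_premises, to_premises) tuples."""
    arrivals = defaultdict(list)          # destination -> [(time, origin)]
    for t, src, dst in movements:
        arrivals[dst].append((t, src))
    sources, frontier = set(), [(infected_premises, infection_time)]
    while frontier:
        premises, before = frontier.pop()
        for t, origin in arrivals[premises]:
            if t <= before and origin not in sources:
                sources.add(origin)
                frontier.append((origin, t))
    return sources

movements = [
    (1, "farm_A", "market_X"),
    (2, "market_X", "farm_B"),
    (5, "farm_C", "farm_B"),   # arrives after infection was detected
]
print(sorted(trace_back(movements, "farm_B", infection_time=3)))
# ['farm_A', 'market_X']
```

The dissertation's contribution is doing this at national scale, which is why the record generation is parallelized with MPI and OpenMP rather than run as a single in-memory search like this one.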
Many of the efforts surrounding the development of the NAIS have encompassed the identification of livestock production and handling premises, as well as of individual animals and herds, whereas little effort has been directed toward the ultimate goal of animal traceback within 48 hours. In this dissertation, computational science is applied to the problem of animal disease traceability. In particular, a computational model is developed for conducting large-scale traceability simulations. The model consists of two components. The first is a parallel Monte Carlo discrete event simulator capable of generating large, NAIS-compliant mock datasets representative of the processing requirements of actual NAIS data. The second is a large-scale parallel disease tracing algorithm mapped onto an SMP supercomputer, where high performance is achieved by adopting a hybrid parallel programming model that mixes shared memory multi-threading (OpenMP) with distributed memory message passing (MPI). The objectives of this dissertation are to characterize the computational requirements of the NAIS, identify computational platforms and programming paradigms well suited to this effort, and identify and address computational performance bottlenecks associated with large-scale tracing algorithms.

Item Open Access: Automorphism towers of general linear groups (Colorado State University. Libraries, 2008). Jónsdóttir, Margrét Sóley, author; Hulpke, Alexander, advisor.

Let G_0 be a group, G_1 the automorphism group of G_0, G_2 the automorphism group of G_1, and so on. The sequence of these groups, together with the natural homomorphisms π_{i,i+1} : G_i → G_{i+1} that take each element to the inner automorphism it induces, is called the automorphism tower of G_0. If π_{i,i+1} is an isomorphism for some i, then the automorphism tower of G_0 is said to terminate. For a given group it is in general not easy to say whether its automorphism tower terminates.
Wielandt showed in 1939 that if G is finite with a trivial center, then the automorphism tower of G terminates in a finite number of steps. Since then, some sporadic examples of automorphism towers of finite groups have been described, but no general results have been proven. In this thesis we study automorphism towers of finite groups with a non-trivial center. We look at the two extremes: (1) groups which are center-rich, and (2) groups which have a small but non-trivial center. We show that, when looking for an infinite family of groups with terminating automorphism towers, the first case is unfeasible. We then turn our attention to the latter case, specifically general linear groups of dimension at least two. In odd characteristic, GL(2, q) is not a split extension of its center. We first calculate the automorphism group of GL(2, q) for odd prime powers q, providing explicit generators and describing the structure of Aut(GL(2, q)) in terms of well-known groups. In this case, the first automorphism group in the tower is a subdirect product of two characteristic factors. This structure is propagated through the tower, and we use it to reduce the problem to studying subgroups of automorphism groups of smaller groups. We then use this structure to compute examples of automorphism towers of GL(2, q).

Item Open Access: Computational measure theoretic approach to inverse sensitivity analysis: methods and analysis (Colorado State University. Libraries, 2009). Butler, Troy Daniel, author; Estep, Donald, advisor.

We consider the inverse problem of quantifying the uncertainty of inputs to a finite dimensional map, e.g., one determined implicitly by the solution of a nonlinear system, given specified uncertainty in a linear functional of the output of the map. The uncertainty in the output functional might be suggested by experimental error or imposed as part of a sensitivity analysis.
We describe this problem probabilistically, so that the uncertainty in the quantity of interest is represented by a random variable with a known distribution, and we assume that the map from the input space to the quantity of interest is smooth. We derive an efficient method for determining the unique solution to the problem of inverting through a many-to-one map by computing set-valued inverses in the input space, which combines a forward sensitivity analysis with the Implicit Function Theorem. We then derive an efficient computational measure theoretic approach to further invert into the entire input space, resulting in an approximate probability measure on the input space.

Item Open Access: Short time analysis of deterministic ODE solutions and the expected value of the corresponding birth-death process (Colorado State University. Libraries, 2009). Buzby, Megan H., author; Estep, Donald, advisor.

There is a standard way to construct a discrete birth-death probability model for an evolution system, in which an ODE model of the system is used to define the probabilities governing the evolution of the stochastic model. Given the significant differences in the dynamical behavior of ODE solutions, which are inherently smooth, and stochastic models, which are subject to random variation, the question naturally arises of the connection between the two models. In particular, we investigate the validity of using a continuum model to define the evolution of a stochastic model.

Item Open Access: Electrical impedance tomography reconstructions in two and three dimensions: from Calderón to direct methods (Colorado State University. Libraries, 2009). Bikowski, Jutta, author; Mueller, Jennifer, advisor.

Electrical Impedance Tomography (EIT) uses voltage and current measurements from the boundary to reconstruct the electrical conductivity distribution inside an unknown object. In this dissertation two different EIT reconstruction algorithms are investigated. The first was introduced by A. P. Calderón [Soc. Bras. de Mat., (1980), pp. 65-73]. His method was implemented and successfully applied to both numerical and experimental data in two dimensions, including a phantom that models a cross section of a human chest and data taken from a human chest.

Item Open Access: A ratio ergodic theorem on Borel actions of Z^d and R^d (Colorado State University. Libraries, 2009). Holt, Eric Norman, author; Rudolph, Daniel, advisor.

We prove a ratio ergodic theorem for free Borel actions of Z^d and R^d on a standard Borel probability space. The proof employs an extension of the Besicovitch Covering Lemma, as well as a notion of coarse dimension that originates in an upcoming paper of Hochman. Due to possible singularity of the measure, we cannot use functional analytic arguments, and we therefore diffuse the measure onto the orbits of the action. This diffused measure is denoted μ_x, and our averages are of the form (1/μ_x(B_n)) ∫_{B_n} f ∘ T^{-v}(x) dμ_x. A Følner condition on the orbits of the action is shown, which is the main tool used in the proof of the ergodic theorem. Also, an extension of a known example of divergence of a ratio average is presented, for which the action is both conservative and free.

Item Open Access: Characteristics of certain families of random graphs (Colorado State University. Libraries, 2009). Hampson, Christian Paul, author; Achter, Jeff, advisor.

Many random network models can be expressed as the product space of the probability spaces of the individual edges. In these cases, the model can be expressed using a matrix of the probabilities of each edge. I then analyze these models using their respective probability matrices. Degree distribution and the larger eigenvalues are among the attributes whose values can be bounded by examining the same attributes of the probability matrix. I also bound the difference between the eigenvalues of the adjacency matrix of a member of a random graph model and the eigenvalues of the probability matrix for the model.
In addition, I find the neighborhood expansion properties for three separate edge-product models.
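The edge-product viewpoint in the last abstract is easy to make concrete: each edge is an independent coin flip governed by an entry of the probability matrix, and expected degrees read directly off that matrix. A hedged toy sketch (hypothetical 3-vertex model, not from the dissertation):

```python
import random

def sample_graph(P, seed=0):
    """Draw one graph from an edge-product model: each edge {i, j} is
    included independently with probability P[i][j] (P symmetric)."""
    rng = random.Random(seed)
    n = len(P)
    return [
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if rng.random() < P[i][j]
    ]

def expected_degrees(P):
    """Expected degree of each vertex, read off the probability matrix
    as its row sums (excluding the diagonal)."""
    n = len(P)
    return [sum(P[i][j] for j in range(n) if j != i) for i in range(n)]

# Hypothetical symmetric edge-probability matrix.
P = [
    [0.0, 0.75, 0.25],
    [0.75, 0.0, 0.5],
    [0.25, 0.5, 0.0],
]
print(expected_degrees(P))  # [1.0, 1.25, 0.75]
```

The dissertation's results bound how far the eigenvalues and degrees of a sampled adjacency matrix can stray from those of P itself; this sketch only shows the model they are stated for.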