Browsing by Author "Hulpke, Alexander, committee member"
Item Open Access A novel method for rapid in vitro radiobioassay (Colorado State University. Libraries, 2015) Crawford, Evan Bogert, author; Zimbrick, John, advisor; Hulpke, Alexander, committee member; LaRosa, Jerome, committee member; Ramsdell, Howard, committee member; Steinhauser, Georg, committee member

Rapid and accurate analysis of internal human exposure to radionuclides is essential to the effective triage and treatment of citizens who may have been exposed to radioactive materials in the environment. The two most likely scenarios in which a large number of citizens would be exposed are the detonation of a radiation dispersal device (RDD, "dirty bomb") or the accidental release of an isotope from an industrial source such as a radioisotope thermoelectric generator (RTG). In the event of a release and dispersion of radioactive materials in a large city, the entire population of the city - including all commuting workers and tourists - would have to be tested rapidly, both to address the psychological needs of citizens exposed to the mental trauma of a possible radiation dose, and to meet the immediate medical needs of those who received the highest doses and greatest levels of internal contamination - those who would benefit most from rapid, intensive medical care. In this research a prototype rapid screening method was developed to screen urine samples for the presence of up to five isotopes, both individually and in mixtures. The isotopes used to develop the method are Co-60, Sr-90, Cs-137, Pu-238, and Am-241. The method avoids time-intensive chemical separations by preparing and counting a single sample on multiple detectors and analyzing the spectra for isotope-specific markers. A rapid liquid-liquid separation using an organic extractive scintillator can be used to help quantify the activity of the alpha-emitting isotopes.
The method provides quantifiable results in less than five minutes for the activity of beta/gamma-emitting isotopes when present in the sample at the intervention level as defined by the Centers for Disease Control and Prevention (CDC), and quantifiable results for the activity levels of alpha-emitting isotopes present at their respective intervention levels in approximately 30 minutes of sample preparation and counting time. Radiation detector spectra - e.g. those from high-purity germanium (HPGe) gamma detectors and liquid scintillation detectors - which contain decay signals from multiple isotopes often have overlapping signals: the counts from one isotope's decay can appear in energy channels associated with another isotope's decay, complicating the calculation of each isotope's activity. The uncertainties associated with analyzing these spectra have been traced in order to determine the effects of one isotope's count rate on the sensitivity and uncertainty associated with each other isotope. The method that was developed takes advantage of activated carbon filtration to eliminate quenching effects and to make the liquid scintillation spectra from different urine samples comparable. The method uses pulse-shape analysis to reduce the interference from beta emitters in the liquid scintillation spectrum and improve the minimum detectable activity (MDA) and minimum quantifiable activity (MQA) for alpha emitters. The method uses an HPGe detector to quantify the activity of gamma emitters, and subtract their isotopes' contributions to the liquid scintillation spectra via a calibration factor, such that the pure beta and pure alpha emitters can be identified and quantified from the resulting liquid scintillation spectra. 
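As an illustrative aside (not part of the dissertation's method), detection limits of the kind the MDA quantifies are commonly estimated with the Currie formula; the background counts, efficiency, and count time in this sketch are hypothetical values chosen only to show the arithmetic:

```python
import math

def currie_mda(background_counts, efficiency, count_time_s, chem_yield=1.0):
    """Approximate minimum detectable activity (Bq) via the Currie formula.

    A common rule-of-thumb form (not taken from the dissertation):
        L_D = 2.71 + 4.65 * sqrt(B)          # detection limit, in counts
        MDA = L_D / (efficiency * yield * t)  # convert counts to activity
    """
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * chem_yield * count_time_s)

# Hypothetical example: 100 background counts, 30% efficiency, 300 s count.
mda = currie_mda(100, 0.30, 300)  # about 0.55 Bq
```

The point of the sketch is only that the detection limit grows with the square root of the background, which is why suppressing interference from other emitters (as the method above does via pulse-shape analysis and spectral subtraction) directly improves the MDA.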
Finally, the method optionally uses extractive scintillators to rapidly separate the alpha emitters from the beta emitters when the activity from the beta emitters is too great to detect or quantify the activity from the alpha emitters without such a separation. The method is able to detect and quantify all five isotopes, with uncertainties and biases usually in the 10-40% range, depending on the isotopic mixture and the activity ratios among the isotopes.

Item Open Access A quantum H*(T)-module via quasimap invariants (Colorado State University. Libraries, 2024) Lee, Jae Hwang, author; Shoemaker, Mark, advisor; Cavalieri, Renzo, advisor; Gillespie, Maria, committee member; Peterson, Christopher, committee member; Hulpke, Alexander, committee member; Chen, Hua, committee member

For X a smooth projective variety, the quantum cohomology ring QH*(X) is a deformation of the usual cohomology ring H*(X), where the product structure is modified to incorporate quantum corrections. These correction terms are defined using Gromov-Witten invariants. When X is toric with geometric quotient description V//T, the cohomology ring H*(V//T) also has the structure of an H*(T)-module. In this paper, we introduce a new deformation of the cohomology of X using quasimap invariants with a light point. This defines a quantum H*(T)-module structure on H*(X) through a modified version of the WDVV equations. We explicitly compute this structure for the Hirzebruch surface of type 2. We conjecture that this new quantum module structure is isomorphic to the natural module structure of the Batyrev ring for a semipositive toric variety.

Item Open Access A search for Lorentz and CPT violation in the neutrino sector of the standard model extension using the near detectors of the Tokai to Kamioka neutrino oscillation experiment (Colorado State University.
Libraries, 2016) Clifton, Gary Alexander, author; Toki, Walter, advisor; Berger, Bruce, committee member; Eykholt, Richard, committee member; Hulpke, Alexander, committee member

The Tokai to Kamioka (T2K) neutrino experiment is designed to search for electron neutrino appearance oscillations and muon neutrino disappearance oscillations. While the main physics goals of T2K fall into conventional physics, T2K may be used to search for more exotic physics. One exotic physics analysis that can be performed is a search for Lorentz and CPT symmetry violation (LV and CPTV) through short-baseline neutrino oscillations. The theoretical framework which describes these phenomena is the Standard Model Extension (SME). Due to its off-axis nature, T2K has two near detectors, and a search for LV and CPTV is performed in each. The search utilizes charged-current inclusive (CC inclusive) neutrino events to look for sidereal variations in the neutrino event rate at each detector. Two methods are developed: the first is a Fast Fourier Transform method that performs a hypothesis test of the data against a set of 10,000 toy Monte Carlo simulations containing no LV signal; the second is a binned likelihood fit. Using three data sets, both analysis methods are consistent with no sidereal variations. One data set is used to calculate upper limits on combinations of the SME coefficients, while the other two are used to constrain the SME coefficients directly. Despite not seeing any indication of LV in the T2K near detectors, the upper limits provided are useful to the theoretical community for continuing to improve theories which include LV and CPTV.

Item Open Access An algorithm for modular decomposition based on multiplexes (Colorado State University.
Libraries, 2015) Chamania, Pritish, author; McConnell, Ross, advisor; Bohm, Wim, committee member; Hulpke, Alexander, committee member

Modular decomposition is instrumental in the design of algorithms for solving many important graph theory problems. It has been applied to developing recognition algorithms for many important perfect graph families. It also forms the basis of a number of efficient algorithms for solving combinatorial optimization problems on graphs. There are a number of efficient algorithms proposed in the literature for computing the modular decomposition. Here we explore an O(n³) modular decomposition algorithm based on the theory of transitive orientation. The algorithm highlights how the problem of finding a transitive orientation is intimately related to that of finding the modular decomposition.

Item Open Access An analysis of factors affecting student success in Math 160 calculus for physical scientists I (Colorado State University. Libraries, 2009) Reinholz, Daniel Lee, author; Klopfenstein, Kenneth F., advisor; Gloeckner, Gene William, 1950-, committee member; Hulpke, Alexander, committee member

The average success rate in MATH 160 Calculus for Physical Scientists I at Colorado State University has been near 60% for at least the past three years. Weak pre-calculus skills are often cited as one of the primary reasons students do not succeed in calculus. To investigate this conjecture we included the ALEKS Preparation for Calculus instructional software as a required component of MATH 160. Despite a perceived decrease in the number of algebra-related questions asked by students, we found no improvement in success rates. We also performed an analysis of other factors in relation to success, such as ACT scores and whether or not students had prior calculus experience.
As a result of our investigations we conjecture that difficulty with conceptual thinking is a more significant factor contributing to non-success in MATH 160 than weak, largely mechanical pre-calculus skills.

Item Open Access Avoiding singularities during homotopy continuation (Colorado State University. Libraries, 2017) Hodges, Timothy E., author; Bates, Daniel J., advisor; Böhm, A. P., committee member; Hulpke, Alexander, committee member; Peterson, Christopher, committee member

In numerical algebraic geometry, the goal is to find solutions to a polynomial system F(x₁, x₂, ..., xₙ). This is done through a process called homotopy continuation. During this process, it is possible to encounter regions of ill-conditioning. These regions can cause homotopy continuation to fail or its run time to increase. In this thesis, we formalize where these regions of ill-conditioning can occur, and give a novel method for avoiding them. In addition, future work and possible improvements to the method are proposed. We also report on related developments in the Bertini software package, and discuss new infrastructure and heuristics for tuning configurations during homotopy continuation.

Item Open Access Conjugacy classes of matrix groups over local rings and an application to the enumeration of abelian varieties (Colorado State University. Libraries, 2012) Williams, Cassandra L., author; Achter, Jeffrey, advisor; Eykholt, Richard, committee member; Hulpke, Alexander, committee member; Penttila, Tim, committee member

The Frobenius endomorphism of an abelian variety of dimension g over a finite field Fq can be considered as an element of the finite matrix group GSp_2g(Z/l^r). The characteristic polynomial of such a matrix defines a union of conjugacy classes in the group, as well as a totally imaginary number field K of degree 2g over Q. Suppose g = 1 or 2.
We compute the proportion of matrices with a fixed characteristic polynomial by first computing the sizes of conjugacy classes in GL_2(Z/l^r) and GSp_4(Z/l^r). Then we use an equidistribution assumption to show that this proportion is related, via a theorem of Everett Howe, to the number of abelian varieties over a finite field with complex multiplication by the maximal order of K.

Item Open Access Counting isogeny classes of Drinfeld modules over finite fields via Frobenius distributions (Colorado State University. Libraries, 2024) Bray, Amie M., author; Achter, Jeffrey, advisor; Gillespie, Maria, committee member; Hulpke, Alexander, committee member; Pallickara, Shrideep, committee member; Pries, Rachel, committee member

Classically, the size of an isogeny class of an elliptic curve -- or more generally, a principally polarized abelian variety -- over a finite field is given by a suitable class number. Gekeler expressed the size of an isogeny class of an elliptic curve over a prime field in terms of a product over all primes of local density functions. These local density functions are what one might expect given a random matrix heuristic. In his proof, Gekeler shows that the product of these factors gives the size of an isogeny class by appealing to class numbers of imaginary quadratic orders. Achter, Altug, Garcia, and Gordon generalized Gekeler's product formula to higher dimensional abelian varieties over prime power fields without the calculation of class numbers. Their proof uses the formula of Langlands and Kottwitz that expresses the size of an isogeny class in terms of adelic orbital integrals. This dissertation focuses on the function field analog of the same problem. Due to Laumon, one can express the size of an isogeny class of Drinfeld modules over finite fields via adelic orbital integrals. Meanwhile, Gekeler proved a product formula for rank two Drinfeld modules using a similar argument to that for elliptic curves.
We generalize Gekeler's formula to higher rank Drinfeld modules by directly comparing Gekeler-style density functions with orbital integrals.

Item Open Access Dynamic representation of consecutive-ones matrices and interval graphs (Colorado State University. Libraries, 2015) Springer, William M., II, author; McConnell, Ross M., advisor; Ray, Indrajit, committee member; Bohm, Wim, committee member; Hulpke, Alexander, committee member

We give an algorithm for updating a consecutive-ones ordering of a consecutive-ones matrix when a row or column is added or deleted. When the addition of the row or column would result in a matrix that does not have the consecutive-ones property, we return, in O(n log n) time, a well-known minimal forbidden submatrix for the consecutive-ones property, known as a Tucker submatrix, which serves as a certificate of correctness of the output in this case. The ability to return such a certificate within this time bound is one of the new contributions of this work. Using this result, we obtain an O(n) algorithm for updating an interval model of an interval graph when an edge or vertex is added or deleted. This matches the bounds obtained by a previous dynamic interval-graph recognition algorithm due to Crespelle. We improve on Crespelle's result by producing an easy-to-check certificate, known as a Lekkerkerker-Boland subgraph, when a proposed change to the graph results in a graph that is not an interval graph. Our algorithm takes O(n log n) time to produce this certificate. The ability to return such a certificate within this time bound is the second main contribution of this work.

Item Open Access Generalizations of comparability graphs (Colorado State University. Libraries, 2022) Xu, Zhisheng, author; McConnell, Ross, advisor; Ortega, Francisco, committee member; Cutler, Harvey, committee member; Hulpke, Alexander, committee member

In rational decision-making models, transitivity of preferences is an important principle.
In a transitive preference, one who prefers x to y and y to z must prefer x to z. Many preference relations, including total orders, weak orders, partial orders, and semiorders, are transitive. Partial orders - preferences that are transitive yet in which not all pairs of elements are comparable - have been studied extensively. In graph theory, a comparability graph is an undirected graph which connects all comparable pairs of elements in a partial order. A transitive orientation is an assignment of a direction to every edge so that the resulting directed graph is transitive; a graph is transitively orientable if such an assignment exists. Comparability graphs are a class of graphs on which clique, coloring, and many other optimization problems are solved by polynomial algorithms. The class also has close connections with other classes of graphs, such as interval graphs, permutation graphs, and perfect graphs. In this dissertation, we define new measures of transitivity to generalize comparability graphs. We introduce the concept of double threshold digraphs together with a parameter λ, which we define as our degree of transitivity. We also define another measure of transitivity, β, as the length of the longest directed path such that there is no edge from the first vertex to the last vertex. We present approximation algorithms and parameterized algorithms for optimization problems and demonstrate that they are efficient for "almost-transitive" preferences.

Item Open Access Generalized RSK for enumerating projective maps from n-pointed curves (Colorado State University. Libraries, 2022) Reimer-Berg, Andrew, author; Gillespie, Maria, advisor; Ghosh, Sudipto, committee member; Hulpke, Alexander, committee member; Shoemaker, Mark, committee member

Schubert calculus has been studied since the 1800s, ever since the mathematician Hermann Schubert studied the intersections of lines and planes. Since then, it has grown to have a plethora of connections to enumerative geometry and algebraic combinatorics alike.
These connections give us a way of using Schubert calculus to translate geometric problems into combinatorial ones, and vice versa. In this thesis, we define several combinatorial objects known as Young tableaux, as well as the well-known RSK correspondence between pairs of tableaux and sequences. We also define the Grassmannian, as well as the Schubert cells that live inside it. Then we describe how Schubert calculus and the Littlewood-Richardson rule allow us to turn problems of intersecting geometric spaces into problems of counting Young tableaux with particular characteristics. We give a combinatorial proof of a recent geometric result of Farkas and Lian on linear series on curves with prescribed incidence conditions. The result states that the expected number of degree-d morphisms from a general genus-g, n-marked curve C to P^r, sending the marked points on C to specified general points in P^r, is equal to (r+1)^g for sufficiently large d. This computation may be rephrased as an intersection problem on Grassmannians, which has a natural combinatorial interpretation in terms of Young tableaux by the classical Littlewood-Richardson rule. We give a bijection, generalizing the well-known RSK correspondence, between the tableaux in question and the (r+1)-ary sequences of length g, and we explore our bijection's combinatorial properties. We also apply similar methods to give a combinatorial interpretation and proof of the fact that, in the modified setting in which r = 1 and several marked points map to the same point in P^1, the number of morphisms is still 2^g for sufficiently large d.

Item Open Access Hochbegabte Kinder - das unterdrückte Genie -- was treibt Hans Giebenrath unter das Rad?: eine neuere Perspektive zu Hermann Hesses Unterm Rad, in Bezug auf die Idee 'das Lernen als Strafe' [Gifted children - the suppressed genius: what drives Hans Giebenrath under the wheel? A newer perspective on Hermann Hesse's Unterm Rad, with regard to the idea of 'learning as punishment'] (Colorado State University.
Libraries, 2013) Riggs, Kaysha, author; Hughes, Jolyon, advisor; Kirby, Rachel, committee member; Hulpke, Alexander, committee member

The current discussion of Hermann Hesse's 1906 book Unterm Rad leaves many open-ended questions. Because the storyline so closely follows Hermann Hesse's personal biography, it obfuscates his authorial intentions and makes it difficult for scholars to differentiate between the two. Many critics also claim that the correlations between Unterm Rad's protagonist, Hans Giebenrath, and Hesse's personal life have actually stagnated later research on the book, as discussions always circle back to Hesse's personal struggles in the Prussian school system. This thesis, while acknowledging the similarities to Hesse's personal timeline, aims to frame the book in its historical context in order to discuss its importance in a literary context. The thesis begins by analyzing the educational methods that were the norm in the early nineteenth century, and establishes that they were strongly based on a long tradition of child rearing by force; this tradition can be traced back to accounts from 1752 and is grounded in a history of bourgeois childrearing. The headmaster's and pastor's treatment of Hans in Unterm Rad clearly demonstrates the force and the suppression of new ideas used as modes of teaching to ensure that students conformed to societal norms; this is consistent with the historical record. Hesse's fictional story is thus an ideal basis for an analysis of the childrearing methods used during that time. In order to effectively introduce a new perspective into the discussion, this paper uses the New Historicism approach and begins with Roland Barthes's theory of authorial intention. It analyzes the text within the constraints of The Death of the Author, and continues with Michel Foucault's What is an Author?
The goal is to evaluate what Unterm Rad says about childrearing at the turn of the century in southern Germany, particularly for gifted children, and how it can be applied to what is already known from a historical standpoint. This idea is then applied, from Hans Giebenrath's point of view, to German educator Katharina Rutschky's concept of "Schwarze Pädagogik" ("black pedagogy") and her theories of suppression. This idea is further supplemented by Alice Miller's research on childrearing, in relation to Hans's experience at the school in Maulbronn.

Item Open Access Longer nilpotent series for classical unipotent groups (Colorado State University. Libraries, 2013) Maglione, Josh, author; Wilson, James, advisor; Hulpke, Alexander, committee member; Boucher, Christina, committee member

We compute the adjoint series for the unipotent subgroup U of the Chevalley group A_d(Z_p). The adjoint series of U has length d²/4 + d/2 + Θ(1), whose factors have order equal to either p or p², whereas the lower central series of U has length d + 1, whose factors have order equal to p^O(d). We provide an algorithm for computing the adjoint series.

Item Open Access Mathematical models for HIV-1 viral capsid structure and assembly (Colorado State University. Libraries, 2015) Sadre-Marandi, Farrah, author; Liu, Jiangguo, advisor; Tavener, Simon, advisor; Chen, Chaoping, committee member; Hulpke, Alexander, committee member; Zhou, Yongcheng, committee member

HIV-1 (human immunodeficiency virus type 1) is a retrovirus that causes acquired immunodeficiency syndrome (AIDS). This infectious disease has high mortality rates, and HIV-1 has therefore received extensive research interest from scientists of multiple disciplines. The group-specific antigen (Gag) polyprotein precursor is the major structural component of HIV. This protein has four major domains, one of which is called the capsid (CA). These proteins join together to create the peculiar structure of HIV-1 virions.
It is known that retrovirus capsid arrangements exhibit a fullerene-like structure. These caged polyhedral arrangements are built entirely from hexamers (six joined proteins) and, by Euler's theorem, exactly 12 pentamers (five proteins). Different distributions of these 12 pentamers result in icosahedral, tubular, or the unique HIV-1 conical capsid shapes. In order to gain insight into the distinctive structure of the HIV capsid, we develop and analyze mathematical models to help understand the underlying biological mechanisms in the formation of viral capsids. The pentamer clusters introduce disclinations, and hence curvature, on the capsids. The HIV-1 capsid structure follows a (5,7)-cone pattern, with 5 pentamers in the narrow end and 7 in the broad end. We show that the curvature concentration at the narrow end is about five times higher than that at the broad end. This leads to the conclusion that the narrow end is the weakest part of the HIV-1 capsid, and to the conjecture that "the narrow end closes last during maturation but opens first during entry into a host cell." Models for icosahedral capsids are established and well received, but models for tubular and conical capsids need further investigation. We propose new models for tubular and conical capsids based on an extension of the Caspar-Klug quasi-equivalence theory. In particular, two and three generating vectors are used to characterize the lattice structures of tubular and conical capsids, respectively. Comparison with published HIV-1 data demonstrates good agreement of our modeling results with experimental data. It is known that there are two stages in viral capsid assembly: nucleation (formation of a nucleus from hexamers) and elongation (building the closed shell). We develop a kinetic model of HIV-1 viral capsid nucleation using a six-species dynamical system. Numerical simulations of capsid protein (CA) multimer concentrations closely match experimental data.
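The Euler-theorem count invoked in the abstract above - that any closed cage built from hexagonal and pentagonal faces must contain exactly 12 pentagons, regardless of the number of hexagons - can be checked directly. This short sketch is illustrative only and is not taken from the dissertation:

```python
from fractions import Fraction

def euler_characteristic(pentagons, hexagons):
    """Euler characteristic V - E + F of a closed cage tiled by
    pentagons and hexagons, where each vertex meets 3 faces and
    each edge is shared by 2 faces (as in fullerene-like capsids)."""
    sides = 5 * pentagons + 6 * hexagons
    v = Fraction(sides, 3)  # every vertex is counted by 3 incident faces
    e = Fraction(sides, 2)  # every edge is counted by 2 incident faces
    f = pentagons + hexagons
    return v - e + f        # algebraically this simplifies to pentagons / 6

# Only 12 pentagons give the sphere's Euler characteristic of 2,
# no matter how many hexagons the cage contains:
assert euler_characteristic(12, 20) == 2    # a C60-like cage
assert euler_characteristic(12, 500) == 2   # an elongated cage
assert euler_characteristic(11, 20) != 2    # 11 pentagons cannot close up
```

Since V - E + F simplifies to p/6 for p pentagons, the sphere's characteristic of 2 forces p = 12, which is the constraint the icosahedral, tubular, and conical capsid models above all share.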
Sensitivity and elasticity analysis of CA multimer concentrations with respect to the association and disassociation rates further reveals the importance of CA dimers in the nucleation stage of viral capsid self-assembly.

Item Open Access Measurement of νμ-induced charged-current single π⁺ production on H₂O (Colorado State University. Libraries, 2015) Assylbekov, Shamil M., author; Wilson, Robert J., advisor; Toki, Walter, committee member; Harton, John, committee member; Berger, Bruce, committee member; Hulpke, Alexander, committee member

T2K is an international collaboration that has constructed an experiment in Japan to investigate the properties of the neutrino. It consists of two near detectors, ND280 and INGRID, and a far detector, Super-Kamiokande. ND280 has multiple sub-detectors, with the π⁰ detector (PØD) being of most importance to this analysis. This work describes the first measurement of the neutrino cross section for the charged-current single positively charged pion (CC1π⁺) interaction channel on water (H₂O), using the PØD as target and detector. The PØD has been taking neutrino interaction data since 2009 in configurations with and without an integrated water target. Using a statistical water-in/water-out event rate subtraction, the νμ-induced CC1π⁺ cross section on water is measured to be ⟨σ⟩ = (1.10 +0.39/−0.36) × 10⁻³⁹ cm², where the result is provided as a single-bin cross section integrated over the entire T2K neutrino energy range. The measurement is based on a sample of 2,703 events selected from beam runs of 2.64 × 10²⁰ protons-on-target (POT) with the PØD water-in configuration, and 2,187 events selected from 3.71 × 10²⁰ POT with the water-out configuration. The corresponding Monte Carlo simulation predicts 1,387.2 and 1,046.0 background events for the water-in and water-out detector configurations, respectively.
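The water-in/water-out subtraction described above can be caricatured as a background-subtracted, POT-normalized rate difference. This is a deliberately simplified sketch using only the event counts quoted in the abstract; it is not the actual T2K analysis, which additionally involves efficiency corrections, flux weighting, and systematic uncertainties:

```python
def water_excess_rate(n_in, bkg_in, pot_in, n_out, bkg_out, pot_out):
    """Schematic water-in minus water-out excess event rate per POT.

    Subtract the simulated background from each sample, normalize each
    to its protons-on-target (POT) exposure, and take the difference;
    the excess is attributed to interactions on the water target.
    Illustrative only - not the published T2K procedure in full.
    """
    rate_in = (n_in - bkg_in) / pot_in     # signal rate, water-in
    rate_out = (n_out - bkg_out) / pot_out # signal rate, water-out
    return rate_in - rate_out

# Counts and exposures quoted in the abstract above:
excess = water_excess_rate(2703, 1387.2, 2.64e20, 2187, 1046.0, 3.71e20)
```

A positive excess is what allows the analysis to attribute a nonzero CC1π⁺ cross section to interactions on water, before the remaining efficiency and flux factors convert the rate into ⟨σ⟩.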
The accuracy of the result is dominated by flux and cross-section model uncertainties. The data favor a systematically smaller cross section than the model, but within the uncertainties the result is consistent with the Monte Carlo simulation prediction of 1.26 × 10⁻³⁹ cm². The result, its significance, and the strategy for future CC1π⁺ measurements are discussed in conclusion.

Item Open Access Modular decomposition of undirected graphs (Colorado State University. Libraries, 2020) Esparza, Adrian E., author; McConnell, Ross, advisor; Pallickara, Sangmi, committee member; Hulpke, Alexander, committee member

Graphs found in nature tend to have structure and interesting compositions that allow for compression of the information and thoughtful analysis of the relationships between vertices. Modular decomposition has been studied since 1967 [1]. Modular decomposition breaks a graph down into smaller components that, from the outside, are inseparable. In doing so, modules provide great potential for better studying problems from genetics to compression. This paper describes the implementation of a practical algorithm that takes a graph and decomposes it into its modules in O(n + m log n) time. In the implementation of this algorithm, each sub-problem was solved using object-oriented design principles, so that a reader can extract the individual objects and apply them to other problems of practical interest. The purpose of this paper is to provide the reader with the tools to easily compute: the modular decomposition of an undirected graph, a partition of an undirected graph, depth-first search on a directed partially complemented graph, and stack operations with a complement stack. The provided implementations all run within the time bound discussed above, or even faster for several of the sub-problems.

Item Open Access Multiplicities and equivariant cohomology (Colorado State University.
Libraries, 2010) Lynn, Rebecca E., author; Duflot, Jeanne, advisor; Miranda, Rick, committee member; Hulpke, Alexander, committee member; Iyer, Hariharan K., committee member

The aim of this paper is to address the following problem: how to relate the algebraic definitions and computations of multiplicity from commutative algebra to computations done in the cohomology theory of group actions on manifolds. Specifically, this paper is concerned with applications of commutative algebra to the study of cohomology rings arising from group actions on manifolds, in the way that Quillen initiated. This paper synthesizes two distinct areas of pure mathematics (commutative algebra and cohomology theory) and two ways of computing multiplicities in order to link them. To accomplish this task, a discussion of commutative algebra is followed by a discussion of cohomology theory; a link between the two is then presented, followed by its application to a significant example. In commutative algebra, we discuss graded rings, Poincaré series, dimension, and multiplicities. Whereas the theory of multiplicities has been developed for local rings, we give an exposition of the theory for graded rings. Several definitions of dimension are presented, and it is proven that all of these distinct definitions are equal. The basic properties of multiplicities are introduced, and a brief discussion of a classical multiplicity in commutative algebra, the Samuel multiplicity, is presented. Then Maiorana's C-multiplicity is defined, and a relationship among all of these multiplicities is observed. In cohomology theory, we address smooth actions of finite groups on manifolds.
As part of this study in cohomology theory, we consider group actions on topological spaces and the Borel construction (equivariant cohomology), completing this part of the paper with a discussion of smooth (or differentiable) actions and setting some notation necessary for our discussion of Maiorana's results, which inspire some of our main theorems but on which we do not rely in this dissertation. Following the treatments of commutative algebra and cohomology theory, we present, without proof, one of Quillen's main results linking these two distinct areas of pure mathematics. Quillen's work results in a formula for the multiplicity of the equivariant cohomology of a compact G-manifold, with G a compact Lie group. We apply these results to the compact G-manifold U/S, where G (a compact Lie group) is embedded in a unitary group U = U(n) and S = S(n) is the diagonal p-torus of rank n in U(n), resulting in a nice topological formula for computing multiplicities. Finally, we end the paper with a proposal for future research.

Item Open Access New constructions of strongly regular graphs (Colorado State University. Libraries, 2014) Lane-Harvard, Elizabeth, author; Penttila, Tim, advisor; Gloeckner, Gene, committee member; Hulpke, Alexander, committee member; Peterson, Chris, committee member

There are many open problems concerning strongly regular graphs: proving non-existence for parameters where none are known; proving existence for parameters where none are known; constructing more examples for parameters where some are already known. The work addressed in this dissertation falls into the last two categories. The methods used involve symmetry, geometry, and experimentation in computer algebra systems. In order to construct new strongly regular graphs, we rely heavily on objects found in finite geometry, specifically two-intersection sets and generalized quadrangles, in which six independent successes occur.
New two-intersection sets are constructed in finite Desarguesian projective planes whose strongly-regular-graph parameters correspond to both previously unknown and known ones. An infinite family of new two-intersection sets is also constructed in finite projective spaces of dimension 5; the resulting infinite family of strongly regular graphs has the same parameters as Paley graphs. Next, using the point graph of the classical generalized quadrangle H(3, q²), q even, a new infinite family of strongly regular graphs is constructed. Then we generalize three infinite families of strongly regular graphs from large arcs in Desarguesian projective planes to the non-Desarguesian case. Finally, a construction of Godsil and Hensel of strongly regular graphs from ovoids of generalized quadrangles is applied to non-classical generalized quadrangles to obtain new families of strongly regular graphs.

Item Open Access Number of 4-cycles of the genus 2 superspecial isogeny graph (Colorado State University. Libraries, 2024) Sworski, Vladimir P., author; Pries, Rachel, advisor; Hulpke, Alexander, committee member; Rajopadhye, Sanjay, committee member; Shoemaker, Mark, committee member

The genus 2 superspecial degree-2 isogeny graph over a finite field of size p² is a graph whose vertices are constructed from genus 2 superspecial curves and whose edges are the degree-2 isogenies between them. Flynn and Ti discovered 4-cycles in the graph, which pose problems for applications in cryptography. Florit and Smith constructed an atlas which describes what the neighborhood of each vertex looks like. We wrote a program in SageMath that can calculate neighborhoods of these graphs for small primes. Much of our work is motivated by these computations. We examine the prevalence of 4-cycles in the graph and, motivated by work of Arpin et al. in the genus 1 situation, in the subgraph called the spine. We calculate the number of 4-cycles that pass through vertices of 12 of the 14 kinds possible.
This also resulted in constructing the neighborhood of all vertices two or fewer steps away for three special types of curves. We also establish conjectures about the number of vertices and cycles in small neighborhoods of the spine.

Item Open Access On approximating transitivity and tractability of graphs (Colorado State University. Libraries, 2016) Manchanda, Saksham, author; McConnell, Ross, advisor; Ray, Indrakshi, advisor; Hulpke, Alexander, committee member

In the general case, in a simple, undirected graph, the problems of finding the largest clique, a minimum coloring, a maximum independent set, and a minimum vertex cover are NP-hard. But there exist some families of graphs, called perfect graphs, where these problems become tractable. One particular class of perfect graphs are the underlying undirected graphs of transitive digraphs, called comparability graphs. We define a new parameter β that approximates the intransitivity of a given graph, and we use β to give a measure of the complexity of finding the largest clique: as β gets worse, the complexity of finding the largest clique quickly grows to exponential time. We also give approximation algorithms that scale with β for all of our NP-hard problems. The β measure of a graph can be computed in O(mn) time; therefore, β can be considered a measure of how tractable these problems are on a given graph.
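To make the transitivity notion underlying the last two abstracts concrete, here is a minimal sketch (not the thesis's algorithm or its β computation) of the basic test: a digraph is transitive exactly when every two-step path x→y→z is shortcut by an edge x→z, and an undirected graph is a comparability graph when its edges admit such an orientation:

```python
def is_transitive(n, edges):
    """Check whether a digraph on vertices 0..n-1 is transitive:
    whenever x->y and y->z are edges, x->z must also be an edge.
    A simple cubic-time check, illustrative only."""
    edge_set = set(edges)
    successors = [[] for _ in range(n)]
    for x, y in edges:
        successors[x].append(y)
    for x, y in edges:          # for each edge x->y ...
        for z in successors[y]:  # ... and each continuation y->z,
            if x != z and (x, z) not in edge_set:
                return False     # missing shortcut x->z: not transitive
    return True

# A total order on 3 vertices is transitive; a directed 3-cycle is not.
assert is_transitive(3, [(0, 1), (1, 2), (0, 2)])
assert not is_transitive(3, [(0, 1), (1, 2), (2, 0)])
```

Failures of this check over all orientations are, informally, what a measure of "intransitivity" such as β quantifies; the thesis's actual definition via longest shortcut-free directed paths is more refined than this sketch.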