Browsing by Author "Peterson, Chris, advisor"
Now showing 1 - 20 of 20
Item Open Access A geometric data analysis approach to dimension reduction in machine learning and data mining in medical and biological sensing (Colorado State University. Libraries, 2017)
Emerson, Tegan Halley, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Nyborg, Jennifer, committee member; Chenney, Margaret, committee member
Geometric data analysis seeks to uncover and leverage structure in data for tasks in machine learning when data is visualized as points in an abstract space of some dimension. This dissertation considers data which is high dimensional with respect to varied notions of dimension. Algorithms developed herein seek to reduce or estimate dimension while preserving the ability to perform a specific task in detection, identification, or classification. In some of the applications the only property considered important to preserve under dimension reduction is the ability to perform the indicated machine learning task, while in others strictly geometric relationships between data points are required to be preserved or minimized. First presented is the development of a numerical representation of rare circulating cells in immunofluorescent images. This representation is paired with a support vector machine and is able to identify differentiating cell structure between the cell populations under consideration. Moreover, this differentiating information can be visualized through inversion of the representation and was found to be consistent with the classification criteria used by clinically trained pathologists. Considered second is the task of identification and tracking of aerosolized bioagents via a multispectral lidar system. A nonnegative matrix factorization problem arose out of this data mining task, one which can be solved in several ways, including as an ℓ1-norm regularized, convex but nondifferentiable optimization problem. Existing methodologies achieve excellent results when the internal matrix factor dimension is known, but fail or become computationally prohibitive when this dimension is not known. A modified optimization problem is proposed that may help reveal the appropriate internal factoring dimension based on the sparsity of averages of nonnegative values. Third, we present an algorithmic framework for reducing dimension in the linear mixing model. The mean-squared error of a statistical estimator of a component of the linear mixing model can be considered as a function of the rank of different estimating matrices. We seek to minimize mean-squared error as a function of the rank of the appropriate estimating matrix; this yields interesting order-determination rules and improved results, relative to full-rank counterparts, in applications to matched subspace detection and generalized modal analysis. Finally, the culminating work of this dissertation explores the existence of nearly isometric, dimension-reducing mappings between special manifolds characterized by different dimensions. Understanding the analogous problem between Euclidean spaces provides insights into potential challenges and pitfalls one could encounter in proving the existence of such mappings. Most significant of the contributions is the statement and proof of a theorem establishing a connection between packing problems on Grassmannian manifolds and nearly isometric mappings between Grassmannians. The frameworks and algorithms constructed and developed in this doctoral research consider multiple manifestations of the notion of dimension. Across applications arising from varied areas of medical and biological sensing, we have shown there to be great benefits to taking a geometric perspective on challenges in machine learning and data mining.
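
As a point of reference for the factorization problem mentioned in the abstract above, a generic ℓ1-regularized nonnegative matrix factorization can be written as below; the exact objective, weighting, and constraints used in the dissertation may differ, so this is an illustrative sketch rather than the author's formulation.

    \min_{W \ge 0,\, H \ge 0} \; \tfrac{1}{2}\,\lVert X - W H \rVert_F^2 \;+\; \lambda \sum_{i,j} H_{ij}

Here X is the nonnegative data matrix, W and H are the factors with internal dimension k, and λ > 0 promotes sparsity in H. With W held fixed, the subproblem in H is convex but nondifferentiable because of the ℓ1 term, and larger values of λ push rows of H toward zero, which is one way an appropriate internal factoring dimension can reveal itself.
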
Item Open Access Comparing sets of data sets on the Grassmann and flag manifolds with applications to data analysis in high and low dimensions (Colorado State University. Libraries, 2020)
Ma, Xiaofeng, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Chong, Edwin, committee member; Scharf, Louis, committee member; Shonkwiler, Clayton, committee member
This dissertation develops numerical algorithms for comparing sets of data sets utilizing the shape and orientation of data clouds. Two key components for "comparing" are the distance measure between data sets and, correspondingly, the geodesic path in between. Both components play a core role connecting the two parts of this dissertation, namely data analysis on the Grassmann manifold and on the flag manifold. For the first part, we build on the well-known geometric framework for analyzing and optimizing over data on the Grassmann manifold. To be specific, we extend classical self-organizing mappings to the Grassmann manifold to visualize sets of high-dimensional data sets in 2D space. We also propose an optimization problem on the Grassmannian to recover missing data. In the second part, we extend the geometric framework to the flag manifold to encode the variability of nested subspaces. There we propose a numerical algorithm for computing a geodesic path and distance between nested subspaces. We also prove theorems to show how to reduce the dimension of the algorithm for practical computations. The approach is shown to have advantages for analyzing data when the number of data points is larger than the number of features.
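
Distances of the kind used in this line of work are typically built from principal angles between subspaces. The following minimal numpy sketch computes the geodesic distance between two points on a Grassmann manifold from those angles; the dissertation's algorithms for geodesics between nested subspaces (flags) are more involved and are not reproduced here.

    import numpy as np

    def grassmann_distance(A, B):
        # Geodesic distance between the column spans of A and B (both n x k),
        # computed from the principal angles between the two subspaces.
        Qa, _ = np.linalg.qr(A)
        Qb, _ = np.linalg.qr(B)
        # Singular values of Qa^T Qb are the cosines of the principal angles.
        cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        theta = np.arccos(np.clip(cosines, -1.0, 1.0))
        return np.linalg.norm(theta)

    # Example: two random 3-dimensional subspaces of R^10.
    rng = np.random.default_rng(0)
    print(grassmann_distance(rng.standard_normal((10, 3)), rng.standard_normal((10, 3))))
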
Item Open Access Cracking open the black box: a geometric and topological analysis of neural networks (Colorado State University. Libraries, 2024)
Cole, Christina, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Cheney, Margaret, committee member; Draper, Bruce, committee member
Deep learning is a subfield of machine learning that has exploded in recent years in terms of publications and commercial consumption. Despite their increasing prevalence in performing high-risk tasks, deep learning algorithms have outpaced our understanding of them. In this work, we home in on neural networks, the backbone of deep learning, and reduce them to their scaffolding defined by polyhedral decompositions. With these decompositions explicitly defined for low-dimensional examples, we utilize novel visualization techniques to build a geometric and topological understanding of them. From there, we develop methods of implicitly accessing neural networks' polyhedral skeletons, which provide substantial computational and memory savings compared to those requiring explicit access. While much of the related work using neural network polyhedral decompositions is limited to toy models and datasets, the savings provided by our method allow us to use state-of-the-art neural networks and datasets in our analyses. Our experiments alone demonstrate the viability of a polyhedral view of neural networks, and our results show its usefulness. More specifically, we show that the geometry that a polyhedral decomposition imposes on its neural network's domain contains signals that distinguish between original and adversarial images. We conclude our work with suggested future directions. Therefore, we (1) contribute toward closing the gap between our use of neural networks and our understanding of them through geometric and topological analyses and (2) outline avenues for extensions of this work.
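
For readers unfamiliar with the polyhedral view, the cells of the decomposition of a ReLU network's domain correspond to binary activation patterns: all inputs producing the same pattern are treated by the network as a single affine map. The sketch below illustrates that standard correspondence for a small fully connected network; it is a simplification, not code from the dissertation.

    import numpy as np

    def activation_pattern(weights, biases, x):
        # Binary ReLU activation pattern of a feedforward network at input x.
        # Inputs that share a pattern lie in the same polyhedral cell of the
        # decomposition of the network's domain.
        pattern = []
        h = x
        for W, b in zip(weights, biases):
            z = W @ h + b
            pattern.append((z > 0).astype(int))   # which ReLU units fire
            h = np.maximum(z, 0)                  # ReLU activation
        return np.concatenate(pattern)

    # Tiny random network R^2 -> R^4 -> R^3.
    rng = np.random.default_rng(1)
    Ws = [rng.standard_normal((4, 2)), rng.standard_normal((3, 4))]
    bs = [rng.standard_normal(4), rng.standard_normal(3)]
    print(activation_pattern(Ws, bs, rng.standard_normal(2)))
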
Item Open Access Exploiting geometry, topology, and optimization for knowledge discovery in big data (Colorado State University. Libraries, 2013)
Ziegelmeier, Lori Beth, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Liu, Jiangguo (James), committee member; Draper, Bruce, committee member
In this dissertation, we consider several topics that are united by the theme of topological and geometric data analysis. First, we consider an application in landscape ecology using a well-known vector quantization algorithm to characterize and segment the color content of natural imagery. Color information in an image may be viewed naturally as clusters of pixels with similar attributes. The inherent structure and distribution of these clusters serves to quantize the information in the image and provides a basis for classification. A friendly graphical user interface called Biological Landscape Organizer and Semi-supervised Segmenting Machine (BLOSSM) was developed to aid in this classification. We consider four different choices for color space and five different metrics in which to analyze our data, and results are compared. Second, we present a novel topologically driven clustering algorithm that blends Locally Linear Embedding (LLE) and vector quantization by mapping color information to a lower-dimensional space, identifying distinct color regions, and classifying pixels together based on both a proximity measure and color content. It is observed that these techniques permit a significant reduction in color resolution while maintaining the visually important features of images. Third, we develop a novel algorithm, which we call Sparse LLE, that leads to sparse representations in local reconstructions by using a data-weighted 1-norm regularization term in the objective function of an optimization problem. It is observed that this new formulation has proven effective at automatically determining an appropriate number of nearest neighbors for each data point. We explore various optimization techniques, namely primal-dual interior point algorithms, to solve this problem, comparing the computational complexity of each. Fourth, we present a novel algorithm that can be used to determine the boundary of a data set, or the vertices of a convex hull encasing a point cloud of data, in any dimension by solving a quadratic optimization problem. In this problem, each point is written as a linear combination of its nearest neighbors, where the coefficients of this linear combination are penalized if they do not construct a convex combination, revealing those points that cannot be represented in this way: the vertices of the convex hull containing the data. Finally, we exploit the relatively new tool from topological data analysis, persistent homology, and consider the use of vector bundles to re-embed data in order to improve the topological signal of a data set by embedding points sampled from a projective variety into successive Grassmannians.
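
The convex-hull idea in the fourth part can be illustrated with a simpler, closely related test: a point is a vertex of the convex hull exactly when it cannot be written as a convex combination of the remaining points. The sketch below uses a linear-programming feasibility check for that purpose; it is a stand-in for, not a reproduction of, the penalized quadratic program developed in the dissertation, and it assumes scipy is available.

    import numpy as np
    from scipy.optimize import linprog

    def hull_vertices(X):
        # Return indices of rows of X (m x n) that are vertices of the convex hull.
        m = X.shape[0]
        vertices = []
        for i in range(m):
            others = np.delete(X, i, axis=0)
            # Seek weights w >= 0 with sum(w) = 1 and others^T w = X[i].
            A_eq = np.vstack([others.T, np.ones(m - 1)])
            b_eq = np.append(X[i], 1.0)
            res = linprog(c=np.zeros(m - 1), A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (m - 1), method="highs")
            if not res.success:          # infeasible: X[i] is not a convex combination
                vertices.append(i)
        return vertices

    # Square plus an interior point; the interior point is not reported.
    pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
    print(hull_vertices(pts))   # [0, 1, 2, 3]
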
Item Open Access Grassmann, Flag, and Schubert varieties in applications (Colorado State University. Libraries, 2017)
Marrinan, Timothy P., author; Kirby, Michael, advisor; Peterson, Chris, advisor; Azimi-Sadjadi, Mahmood R., committee member; Bates, Dan, committee member; Draper, Bruce, committee member
This dissertation develops mathematical tools for signal processing and pattern recognition tasks where data with the same identity is assumed to vary linearly. We build on the growing canon of techniques for analyzing and optimizing over data on Grassmann manifolds. Specifically, we expand on a recently developed method referred to as the flag mean, which finds an average representation for a collection of data that consists of linear subspaces of possibly different dimensions. When prior knowledge exists about relationships between these data, we show that a point analogous to the flag mean can be found as an element of a Schubert variety that incorporates this theoretical information. This domain restriction relates closely to a recent result regarding point-to-set functions. This restricted average, along with a property of the flag mean that prioritizes weak but common information, leads to practical applications of the flag mean such as chemical plume detection in long-wave infrared hyperspectral videos, and a modification of the well-known diffusion map for adaptively visualizing data relationships in two dimensions.

Item Open Access Group action on neighborhood complexes of Cayley graphs (Colorado State University. Libraries, 2014)
Hughes, Justin, author; Hulpke, Alexander, advisor; Peterson, Chris, advisor; Berger, Bruce, committee member; Cavalieri, Renzo, committee member; Wilson, James, committee member
Given a group G generated by S ≐ {g1, …, gn}, one can construct the Cayley graph Cayley(G, S). Given a distance set D ⊂ Z≥0 and Cayley(G, S), one can construct a D-neighborhood complex. This neighborhood complex is a simplicial complex to which we can associate a chain complex. The group G acts on this chain complex, and this leads to an action on the homology of the chain complex. These group actions decompose into several representations of G. This thesis uses tools from group theory, representation theory, homological algebra, and topology to further our understanding of the interplay between generated groups (i.e., a group together with a set of generators), corresponding representations on their associated D-neighborhood complexes, and the homology of the D-neighborhood complexes.

Item Open Access Independence complexes of finite groups (Colorado State University. Libraries, 2021)
Pinckney, Casey M., author; Hulpke, Alexander, advisor; Peterson, Chris, advisor; Adams, Henry, committee member; Neilson, James, committee member
Understanding generating sets for finite groups has been explored previously via the generating graph of a group, where vertices are group elements and edges are given by pairs of group elements that generate the group. We generalize this idea by considering minimal generating sets (with respect to inclusion) for subgroups of finite groups. These form a simplicial complex, which we call the independence complex. The vertices of the independence complex are nonidentity group elements and the faces of size k correspond to minimal generating sets of size k. We give a complete characterization via constructive algorithms, together with enumeration results, for the independence complexes of cyclic groups whose order is a squarefree product of primes, finite abelian groups whose order is a product of powers of distinct primes, and the nonabelian class of semidirect products Cp1p3⋯p2n−1 ⋊ Cp2p4⋯p2n, where p1, p2, …, p2n are distinct primes with p2i−1 > p2i for all 1 ≤ i ≤ n. In the latter case, we introduce a tool called a combinatorial diagram, which is a multipartite simplicial complex under certain numerical and minimal covering conditions. Combinatorial diagrams seem to be an interesting area of study in their own right. We also include GAP and Polymake code that generates the facets of the independence complex of any (small enough) finite group, as well as visualizes the independence complexes in small dimensions.

Item Open Access k-simplex volume optimizing projection algorithms for high-dimensional data sets (Colorado State University. Libraries, 2021)
Stiverson, Shannon J., author; Kirby, Michael, advisor; Peterson, Chris, advisor; Adams, Henry, committee member; Hess, Ann, committee member
Many applications produce data sets that contain hundreds or thousands of features and consequently sit in very high-dimensional space. It is desirable for purposes of analysis to reduce the dimension in a way that preserves certain important properties. Previous work has established conditions necessary for projecting data into lower dimensions while preserving pairwise distances up to some tolerance threshold, and algorithms have been developed to do so optimally. However, although similar criteria for projecting data into lower dimensions while preserving k-simplex volumes have been established, there are currently no algorithms seeking to optimally preserve such embedded volumes. In this work, two new algorithms are developed and tested: one which seeks to optimize the smallest projected k-simplex volume, and another which optimizes the average projected k-simplex volume.
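
The embedded volumes in question can be computed directly from a Gram determinant, which is what a projection is asked to (approximately) preserve. The following numpy sketch computes the volume of a k-simplex in any ambient dimension; the projection-optimization algorithms themselves are the dissertation's contribution and are not reproduced here.

    import numpy as np
    from math import factorial

    def simplex_volume(vertices):
        # Volume of the k-simplex spanned by the (k+1) rows of `vertices`,
        # valid in any ambient dimension: vol = sqrt(det(E^T E)) / k!,
        # where the columns of E are edge vectors from the first vertex.
        V = np.asarray(vertices, dtype=float)
        E = (V[1:] - V[0]).T
        k = E.shape[1]
        gram = E.T @ E
        return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k)

    # The 2-simplex with vertices e1, e2, e3 in R^3 has area sqrt(3)/2.
    print(simplex_volume(np.eye(3)))
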
Item Open Access Linear models, signal detection, and the Grassmann manifold (Colorado State University. Libraries, 2014)
Schwickerath, Anthony Norbert, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Scharf, Louis, committee member; Eykholt, Richard, committee member
Standard approaches to linear signal detection, reconstruction, and model identification problems, such as matched subspace detectors (MF, MDD, MSD, and ACE) and anomaly detectors (RX), are derived in the ambient measurement space using statistical methods (GLRT, regression). While the motivating arguments are statistical in nature, geometric interpretations of the test statistics are sometimes developed after the fact. Given a standard linear model, many of these statistics are invariant under orthogonal transformations, have a constant false alarm rate (CFAR), and some are uniformly most powerful invariant (UMPI). These properties, combined with the simplicity of the tests, have led to their widespread use. In this dissertation, we present a framework for applying real-valued functions on the Grassmann manifold in the context of these same signal processing problems. Specifically, we consider linear subspace models which, given assumptions on the broadband noise, correspond to Schubert varieties on the Grassmann manifold. Beginning with increasing (decreasing) or Schur-convex (-concave) functions of principal angles between pairs of points, of which the geodesic and chordal distances (or probability distribution functions) are examples, we derive the associated point-to-Schubert-variety functions and present signal detection and reconstruction algorithms based upon this framework. As a demonstration of the framework in action, we implement an end-to-end system utilizing our framework and algorithms. We present results of this system processing real hyperspectral images.

Item Open Access Low rank representations of matrices using nuclear norm heuristics (Colorado State University. Libraries, 2014)
Osnaga, Silvia Monica, author; Kirby, Michael, advisor; Peterson, Chris, advisor; Bates, Dan, committee member; Wang, Haonan, committee member
The pursuit of low-dimensional structure from high-dimensional data leads in many instances to the problem of finding the lowest rank matrix among a parameterized family of matrices. In its most general setting, this problem is NP-hard. Different heuristics have been introduced for approaching the problem. Among them is the nuclear norm heuristic for rank minimization. One aspect of this thesis is the application of the nuclear norm heuristic to the Euclidean distance matrix completion problem. As a special case, the approach is applied to the graph embedding problem. More generally, semidefinite programming, convex optimization, and the nuclear norm heuristic are applied to the graph embedding problem in order to extract invariants such as the chromatic number, Rn embeddability, and Borsuk embeddability. In addition, we apply related techniques to decompose a matrix into components which simultaneously minimize a linear combination of the nuclear norm and the spectral norm. In the case when the Euclidean distance matrix is the distance matrix for a complete k-partite graph, it is shown that the nuclear norm of the associated positive semidefinite matrix can be evaluated in terms of the second elementary symmetric polynomial evaluated at the partition. We prove that for k-partite graphs the maximum value of the nuclear norm of the associated positive semidefinite matrix is attained in the situation when we have an equal number of vertices in each set of the partition. We use this result to determine a lower bound on the chromatic number of the graph. Finally, we describe a convex optimization approach to decomposition of a matrix into two components using the nuclear norm and spectral norm.
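
As an illustration of the nuclear norm heuristic at work (here in the familiar matrix-completion setting rather than the Euclidean distance matrix problems treated in the thesis), the sketch below runs plain singular value thresholding with numpy; it is not the semidefinite-programming formulation the author uses, and the threshold and step size are common heuristics rather than tuned values.

    import numpy as np

    def svt_complete(M, mask, tau=None, step=1.2, iters=500):
        # Singular value thresholding: a simple iterative scheme for the nuclear
        # norm heuristic, approximately recovering a low-rank matrix from the
        # entries where `mask` is True.
        if tau is None:
            tau = 5.0 * np.sqrt(M.size)        # common heuristic for the threshold
        Y = np.zeros_like(M, dtype=float)
        X = np.zeros_like(M, dtype=float)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
            Y = Y + step * mask * (M - X)                    # push toward the observed data
        return X

    # Example: complete a random rank-2 matrix observed on roughly 60% of its entries.
    rng = np.random.default_rng(2)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
    mask = rng.random(M.shape) < 0.6
    print(np.linalg.norm(svt_complete(M, mask) - M) / np.linalg.norm(M))   # relative error
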
Item Open Access Mean variants on matrix manifolds (Colorado State University. Libraries, 2012)
Marks, Justin D., author; Peterson, Chris, advisor; Kirby, Michael, advisor; Bates, Dan, committee member; Anderson, Chuck, committee member
The geometrically elegant Stiefel and Grassmann manifolds have become organizational tools for data applications, such as illumination spaces for faces in digital photography. Modern data analysis involves increasingly large-scale data sets, both in terms of number of samples and number of features per sample. In circumstances such as when large-scale data has been mapped to a Stiefel or Grassmann manifold, the computation of mean representatives for clusters of points on these manifolds is a valuable tool. We derive three algorithms for determining mean representatives for a cluster of points on the Stiefel manifold and the Grassmann manifold. Two algorithms, the normal mean and the projection mean, follow the theme of the Karcher mean, relying upon inversely related maps that operate between the manifold and the tangent bundle. These maps are informed by the geometric definition of the tangent bundle and the normal bundle. From the cluster of points, each algorithm exploits these maps in a predictor/corrector loop until converging, with prescribed tolerance, to a fixed point. The fixed point acts as the normal mean representative, or projection mean representative, respectively, of the cluster. This method shares its principal structural characteristics with the Karcher mean, but utilizes a distinct pair of inversely related maps. The third algorithm, called the flag mean, operates in a context comparable to a generalized Grassmannian. It produces a mean subspace of arbitrary dimension. We provide applications and discuss generalizations of these means to other manifolds.

Item Open Access Metric thickenings and group actions (Colorado State University. Libraries, 2020)
Heim, Mark T., author; Adams, Henry, advisor; Peterson, Chris, advisor; Neilson, James, committee member
Let G be a group acting properly and by isometries on a metric space X; it follows that the quotient or orbit space X/G is also a metric space. We study the Vietoris–Rips and Čech complexes of X/G. Whereas (co)homology theories for metric spaces let the scale parameter of a Vietoris–Rips or Čech complex go to zero, and whereas geometric group theory requires the scale parameter to be sufficiently large, we instead consider intermediate scale parameters (neither tending to zero nor to infinity). As a particular case, we study the Vietoris–Rips and Čech thickenings of projective spaces at the first scale parameter where the homotopy type changes.

Item Open Access Normalizing Parseval frames by gradient descent (Colorado State University. Libraries, 2024)
Caine, Anthony, author; Peterson, Chris, advisor; Shonkwiler, Clayton, advisor; Adams, Henry, committee member; Neilson, Jamie, committee member
Equinorm Parseval Frames (ENPFs) are collections of equal-length vectors that form Parseval frames, meaning they are spanning sets that satisfy a version of the Parseval identity. As such, they have many of the desirable features of orthonormal bases for signal processing and data representation, but provide advantages over orthonormal bases in settings where redundancy is important to provide robustness to data loss. We give three methods for normalizing Parseval frames, that is, for flowing a generic Parseval frame to an ENPF. This complements prior work showing that equal-norm frames could be "Parsevalized" and potentially provides new avenues for attacking the Paulsen problem, which seeks sharp upper bounds on the distance to the space of ENPFs in terms of norm and spectral data. This work is based on ideas from symplectic geometry and geometric invariant theory.

Item Open Access Object and action detection methods using MOSSE filters (Colorado State University. Libraries, 2012)
Arn, Robert T., author; Kirby, Michael, advisor; Peterson, Chris, advisor; Draper, Bruce, committee member
In this thesis we explore the application of the Minimum Output Sum of Squared Error (MOSSE) filter to object detection in images as well as action detection in video. We exploit the properties of the Fourier transform for computing correlations in two and three dimensions. We perform a comprehensive examination of the shape parameters of the desired target response and determine values that optimize the filter performance for specific objects and actions. In addition, we propose the Gaussian Iterative Response (GIR) algorithm and the Multi-Sigma Geometric Mean method to improve the MOSSE filter response on test signals. Also, new detection criteria are investigated and shown to boost the detection accuracy on two well-known data sets.
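
For context, the classical MOSSE filter that the thesis builds on has a closed form in the Fourier domain, where correlation becomes elementwise multiplication. The numpy sketch below shows that baseline 2-D construction; the thesis's GIR and Multi-Sigma Geometric Mean refinements are not reproduced here.

    import numpy as np

    def mosse_filter(templates, responses, eps=1e-5):
        # Closed-form MOSSE filter in the Fourier domain:
        #   H* = sum_i G_i .* conj(F_i) / (sum_i F_i .* conj(F_i) + eps),
        # where F_i, G_i are FFTs of training patches and of their desired
        # (typically Gaussian-shaped) target responses.
        num = np.zeros(templates[0].shape, dtype=complex)
        den = np.zeros(templates[0].shape, dtype=complex)
        for f, g in zip(templates, responses):
            F, G = np.fft.fft2(f), np.fft.fft2(g)
            num += G * np.conj(F)
            den += F * np.conj(F)
        return num / (den + eps)          # conjugate filter, ready to apply

    def apply_filter(H_conj, image):
        # Correlation in the spatial domain = multiplication in the Fourier domain.
        return np.real(np.fft.ifft2(np.fft.fft2(image) * H_conj))
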
Item Open Access On the formulation and uses of SVD-based generalized curvatures (Colorado State University. Libraries, 2016)
Arn, Robert T., author; Kirby, Michael, advisor; Peterson, Chris, advisor; Bates, Dan, committee member; Reiser, Raoul, committee member
In this dissertation we consider the problem of computing generalized curvature values from noisy, discrete data and applications of the provided algorithms. We first establish a connection between the Frenet-Serret frame, typically defined on an analytical curve, and the vectors from the local singular value decomposition (SVD) of a discretized time-series. Next, we expand upon this connection to relate generalized curvature values, or curvatures, to a scaled ratio of singular values. Initially, the local singular value decomposition is centered on a point of the discretized time-series. This provides for an efficient computation of curvatures when the underlying curve is known. However, when the structure of the curve is not known, for example, when noise is present in the tabulated data, we propose two modifications. The first modification computes the local singular value decomposition on the mean-centered data of a windowed selection of the time-series. We observe that the mean-centered version increases the stability of the curvature estimations in the presence of signal noise. The second modification is an adaptive method for selecting the size of the window, or local ball, to use for the singular value decomposition. This allows us to use a large window size when curvatures are small, which reduces the effects of noise thanks to the use of a large number of points in the SVD, and to use a small window size when curvatures are large, thereby best capturing the local curvature. Overall, we observe that adapting the window size to the data enhances the estimates of generalized curvatures. The combination of these two modifications produces a tool for computing generalized curvatures with reasonable precision and accuracy. Finally, we compare our algorithm, with and without modifications, to existing numerical curvature techniques on different types of data, such as that from the Microsoft Kinect 2 sensor. To address the topic of action segmentation and recognition, a popular topic within the field of computer vision, we created a new dataset from this sensor showcasing a pose-space, skeletonized representation of individuals performing continuous human actions as defined by the MSRC-12 challenge. When this data is optimally projected onto a low-dimensional space, we observe that each human motion lies on a distinguished line, plane, hyperplane, etc. During transitions between motions, either the dimension of the optimal subspace changes significantly, or the trajectory of the curve through pose space nearly reverses. We use our methods of computing generalized curvature values to identify these locations, categorized as either high curvatures or changing curvatures. The geometric characterization of the time-series allows us to segment individual, or geometrically distinct, motions. Finally, using these segments, we construct a methodology for selecting motions to conjoin for the task of action classification.
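
The core computation behind these estimates is a local, mean-centered SVD of a sliding window of the time-series; curvature estimates are then built from scaled ratios of the resulting singular values. The sketch below shows only that windowed SVD step, with the exact scaling and the adaptive window-size rule (the dissertation's contributions) left out.

    import numpy as np

    def local_singular_values(X, center, half_width):
        # Singular values of a mean-centered window of the discrete time-series
        # X (num_points x dim), centered at index `center`.
        lo = max(center - half_width, 0)
        hi = min(center + half_width + 1, len(X))
        W = X[lo:hi]
        W = W - W.mean(axis=0)                 # mean-center (first modification)
        return np.linalg.svd(W, compute_uv=False)

    # On a planar curve, the ratio of the second to the first singular value
    # tracks (up to scaling) the first curvature over the window.
    t = np.linspace(0.0, np.pi, 200)
    arc = np.column_stack([np.cos(t), np.sin(t)])
    s = local_singular_values(arc, center=100, half_width=10)
    print(s[1] / s[0])
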
Item Open Access Ramsey regions and simplicial homology tables for graphs (Colorado State University. Libraries, 2008)
Frederick, Christopher Austin, author; Peterson, Chris, advisor
Ramsey theory is the investigation of edge-colored graphs which force a monochromatic subgraph. We devise a way of breaking certain Ramsey theory problems into "smaller" pieces so that information about Ramsey theory can be gained without solving the entire problem, which is often difficult to do. Next, the work on Ramsey regions for graphs is translated into the language of hypergraphs, and theorems and techniques are reworked to fit appropriately into the setting of hypergraphs. The persistence complex, a tool developed for large data sets, is then examined in the setting of graphs. Various simplicial complexes can be assigned to a graph, and for a given simplicial complex the persistence complex can be constructed, giving a highly detailed graph invariant. Connections between a graph and its persistence complex are investigated.

Item Open Access Sparse matrix varieties, Daubechies spaces, and good compression regions of Grassmann manifolds (Colorado State University. Libraries, 2024)
Collery, Brian, author; Peterson, Chris, advisor; Shonkwiler, Clayton, advisor; Cavalieri, Renzo, committee member; Kirby, Michael, committee member; Pouchet, Louis-Nöel, committee member
The Grassmann manifold Gr(k, n) is a geometric object whose points parameterize k-dimensional subspaces of Rn. The flag manifold is a generalization in that its points parameterize flags of vector spaces in Rn. This thesis concerns applications of the geometry of the Grassmann and flag manifolds, with an emphasis on image compression. As a motivating example, the discrete versions of Daubechies wavelets generate distinguished n-dimensional subspaces of R2n that can be considered as distinguished points on Gr(n, 2n). We show that geodesic paths between "Daubechies points" parameterize families of "good" image compression matrices. Furthermore, we show that these paths lie on a distinguished Schubert cell in the Grassmannian. Inspired by the structure of Daubechies wavelets, we define and explore sparse matrix varieties as a generalization. Keeping with that theme, we are interested in understanding geometric considerations that constrain the "good" compression region of a Grassmann manifold.

Item Open Access The D-neighborhood complex of a graph (Colorado State University. Libraries, 2014)
Previte, Corrine, author; Peterson, Chris, advisor; Hulpke, Alexander, advisor; Bates, Dan, committee member; Gelfand, Martin, committee member
The neighborhood complex of a graph G is an abstract simplicial complex formed by the subsets of the neighborhoods of all vertices in G. The construction of this simplicial complex can be generalized to use any subset of graph distances as a means to form the simplices in the associated simplicial complex. Consider a simple graph G with diameter d. Let D be a subset of {0, 1, …, d}. For each vertex u, the D-neighborhood is the simplex consisting of all vertices whose graph distance from u lies in D. The D-neighborhood complex of G, denoted DN(G, D), is the simplicial complex generated by the D-neighborhoods of vertices in G. We relate properties of the graph G with the homology of the chain complex associated to DN(G, D).
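
The construction in the last abstract above is easy to experiment with: the generating simplices of DN(G, D) are just distance-constrained neighborhoods. A minimal sketch, assuming the networkx package is available:

    import networkx as nx

    def d_neighborhoods(G, D):
        # Generating simplices of the D-neighborhood complex DN(G, D): for each
        # vertex u, the set of vertices whose graph distance from u lies in D.
        # The complex itself consists of all subsets of these sets.
        dist = dict(nx.all_pairs_shortest_path_length(G))
        return [frozenset(v for v in G if dist[u].get(v, float("inf")) in D)
                for u in G]

    # Example: the 6-cycle with D = {1, 2}; each generating simplex has 4 vertices.
    print(d_neighborhoods(nx.cycle_graph(6), D={1, 2}))
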
Item Open Access The numerical algebraic geometry approach to polynomial optimization (Colorado State University. Libraries, 2017)
Davis, Brent R., author; Bates, Daniel J., advisor; Peterson, Chris, advisor; Kirby, Michael, committee member; Maciejewski, A. A., committee member
Numerical algebraic geometry (NAG) consists of a collection of numerical algorithms, based on homotopy continuation, to approximate the solution sets of systems of polynomial equations arising from applications in science and engineering. This research focused on finding global solutions to constrained polynomial optimization problems of moderate size using NAG methods. The benefit of employing a NAG approach to nonlinear optimization problems is that every critical point of the objective function is obtained with probability one. The NAG approach to global optimization aims to reduce computational complexity during path tracking by exploiting structure that arises from the corresponding polynomial systems. This thesis will consider applications to systems biology and life sciences where polynomials solve problems in model compatibility, model selection, and parameter estimation. Furthermore, these techniques produce mathematical models of large data sets on non-Euclidean manifolds such as a disjoint union of Grassmannians. These methods will also play a role in analyzing the performance of existing local methods for solving polynomial optimization problems.

Item Open Access Theory and algorithms for w-stable ideals (Colorado State University. Libraries, 2024)
Ireland, Seth, author; Peterson, Chris, advisor; Cavalieri, Renzo, advisor; Gillespie, Maria, committee member; Sreedharan, Sarath, committee member
Strongly stable ideals are a class of monomial ideals which correspond to generic initial ideals in characteristic zero. Such ideals can be described completely by their Borel generators, a subset of the minimal monomial generators of the ideal. In [1], Francisco, Mermin, and Schweig develop formulas for the Hilbert series and Betti numbers of strongly stable ideals in terms of their Borel generators. In this thesis, a specialization of strongly stable ideals is presented which further restricts the subset of relevant generators. A choice of weight vector w ∈ Nn>0, i.e., an n-tuple of positive integers, restricts the set of strongly stable ideals to a subset designated as w-stable ideals. This restriction allows one to further compress the Borel generators to a subset termed the weighted Borel generators of the ideal. As in the non-weighted case, formulas for the Hilbert series and Betti numbers of strongly stable ideals can be expressed in terms of their weighted Borel generators. In computational support of this class of ideals, the new Macaulay2 package wStableIdeals.m2 has been developed, and segments of its code support computations within the thesis. In a strengthening of combinatorial connections, strongly stable partitions are defined and shown to be in bijection with totally symmetric partitions.
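
To make the notion of strong stability concrete, the following plain-Python sketch checks the defining Borel-move property on the generators of a monomial ideal (for monomial ideals it suffices to check the generators). It illustrates the unweighted notion only; the w-stable refinement and the wStableIdeals.m2 package are not reproduced here.

    def is_strongly_stable(generators, n):
        # `generators`: minimal monomial generators, each an exponent tuple of
        # length n in variables x_1 > x_2 > ... > x_n. The ideal is strongly
        # stable if for every generator g and every x_j dividing g, the Borel
        # move x_i * g / x_j (with i < j) again lies in the ideal.
        def in_ideal(m):
            # A monomial lies in a monomial ideal iff some generator divides it.
            return any(all(gi <= mi for gi, mi in zip(g, m)) for g in generators)

        for g in generators:
            for j in range(n):
                if g[j] == 0:
                    continue
                for i in range(j):
                    moved = list(g)
                    moved[j] -= 1
                    moved[i] += 1          # Borel move x_i * g / x_j
                    if not in_ideal(tuple(moved)):
                        return False
        return True

    # (x^2, x*y) in k[x, y] is strongly stable; (y^2) alone is not.
    print(is_strongly_stable([(2, 0), (1, 1)], n=2))   # True
    print(is_strongly_stable([(0, 2)], n=2))           # False
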