Browsing by Author "Zhou, Yongcheng, committee member"
Now showing 1 - 10 of 10
Item Open Access A posteriori error estimates for the Poisson problem on closed, two-dimensional surfaces (Colorado State University. Libraries, 2011) Newton, William F., author; Estep, Donald J., 1959-, advisor; Holst, Michael J., committee member; Tavener, Simon, committee member; Zhou, Yongcheng, committee member; Breidt, F. Jay, committee member
The solution of partial differential equations on non-Euclidean domains has been an active area of research in recent years. The Poisson problem is a partial differential equation that is useful on curved surfaces. On a curved surface, the Poisson problem features the Laplace-Beltrami operator, a generalization of the Laplacian that is specific to the surface on which the problem is being solved. A finite element method for solving the Poisson problem on a closed surface has been described and shown to converge with order h². Here, we review this finite element method and the background material necessary for defining it. We then construct an adjoint-based a posteriori error estimate for the problem, discuss some computational issues that arise in solving the problem, and show some numerical examples. The major sources of numerical error when solving the Poisson problem are geometric error, discretization error, quadrature error, and measurement error. Geometric error occurs when distances, areas, and angles are distorted by using a flat domain to parametrize a curved one. Discretization error results from using a finite-dimensional space of functions to approximate an infinite-dimensional space. Quadrature error arises when we use numerical quadrature to evaluate the integrals required by the finite element method. Measurement error arises from error and uncertainty in our knowledge of the surface itself.
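As a flat, one-dimensional analogue of the second-order convergence mentioned above (a sketch only, using a finite difference scheme rather than the dissertation's surface finite element method), halving the mesh size h of a second-order Poisson solver reduces the error by a factor of about four:

```python
import numpy as np

def solve_poisson_1d(n):
    # Central-difference solve of -u'' = pi^2 sin(pi x) on [0, 1],
    # u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    # Tridiagonal matrix for -u'' with homogeneous Dirichlet conditions.
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, f)
    exact = np.sin(np.pi * x[1:-1])
    return np.max(np.abs(u - exact))

e_coarse = solve_poisson_1d(32)
e_fine = solve_poisson_1d(64)
print(e_coarse / e_fine)  # close to 4, confirming O(h^2) convergence
```

The error ratio near 4 under mesh halving is the numerical signature of order-h² convergence discussed in the abstract.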
We are able to estimate the amount of each of these types of error and show when each type will be significant.
Item Open Access A potential vorticity diagnosis of tropical cyclone track forecast errors (Colorado State University. Libraries, 2023) Barbero, Tyler Warren, author; Bell, Michael M., advisor; Barnes, Elizabeth A., committee member; Chen, Jan-Huey, committee member; Klotzbach, Philip J., committee member; Zhou, Yongcheng, committee member
A tropical cyclone (TC) can cause significant impacts on coastal and near-coastal communities through storm surge, flooding, intense winds, and heavy rainfall. Accurately predicting TC track is crucial to giving affected populations time to prepare and evacuate. Over the years, advancements in observational quality and quantity, numerical models, and data assimilation techniques have led to a reduction in average track errors. However, large forecast errors still occur, highlighting the need for ongoing research into the causes of track errors in models. We use the piecewise potential vorticity (PV) inversion diagnostic technique to investigate the sources of error in the track forecasts of four high-resolution numerical weather models during the hyperactive 2017 Atlantic hurricane season. The piecewise PV inversion technique quantifies the steering of the TC track by individual large-scale pressure systems, as well as the associated steering errors. Through systematic use of the diagnostic tool, errors that occur consistently (model biases) can also be identified. TC movement generally follows the atmospheric flow generated by large-scale environmental pressure systems, so errors in the simulated flow cause errors in the TC track forecast. To understand how the environment steers TCs, we use the Shapiro decomposition to remove the TC PV field from the total PV field, isolating the environmental (i.e., perturbation) PV field.
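The lag-correlation diagnostic used in this work can be sketched on synthetic data (illustrative only; the series, the 4-sample lag, and the noise level are assumptions, not the dissertation's data): track errors that respond to earlier steering-flow errors show up as high correlation at a negative lag.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
steer_err = rng.standard_normal(n)  # synthetic steering-flow errors
# Synthetic track errors responding to steering errors 4 samples earlier
# (e.g., 24 h at a hypothetical 6-hourly spacing), plus observation noise.
track_err = np.roll(steer_err, 4) + 0.3 * rng.standard_normal(n)

def lag_corr(x, y, k):
    # Correlation of x at time t with y at time t + k (k >= 0).
    if k == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-k], y[k:])[0, 1]

print(lag_corr(steer_err, track_err, 4))  # high: track errors lag steering errors
print(lag_corr(steer_err, track_err, 0))  # near zero with no lag
```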
The perturbation PV field was partitioned into six systems: the Bermuda High and the Continental High, which compose the negative environmental PV, and quadrants to the northwest, northeast, southeast, and southwest of the TC, which compose the positive environmental PV. Each piecewise PV perturbation system was inverted to retrieve the balanced mass and wind fields. To quantify the steering contribution of individual systems to TC movement, a metric called the deep layer mean steering flow (DLMSF) is defined, and errors in the forecast DLMSF were calculated by comparing the forecast to the analysis steering flow. Lag correlation analyses of DLMSF errors and track errors showed moderate-to-high correlation at lags of -24 to 0 hours, which indicates that track errors are caused in part by DLMSF errors. Three hurricanes (Harvey, Irma, and Maria) were analyzed in depth, and errors in their track forecasts are attributed to errors in the DLMSF. A basin-scale analysis was also performed on all hurricanes in the 2017 Atlantic hurricane season. The DLMSF mean absolute error (MAE) showed that the Bermuda High was the largest contributor to error, the Continental High a moderate contributor, and the four quadrants lower contributors. High-error cases were composited to examine potential model biases. On average, the composite showed lower balanced geopotential heights around the climatological position of the Bermuda High, associated with the recurving of storms in the North Atlantic basin. The analysis techniques developed in this thesis aid in the identification of model biases, which will lead to improved track forecasts in the future.
Item Open Access A two-field finite element solver for linear poroelasticity (Colorado State University.
Libraries, 2020) Wang, Zhuoran, author; Liu, Jiangguo, advisor; Tavener, Simon, advisor; Zhou, Yongcheng, committee member; Ma, Kaka, committee member
Poroelasticity models the interaction between an elastic porous medium and the fluid flowing in it. It has wide applications in biomechanics, geophysics, and soil mechanics. Because of the difficulty of deriving analytical solutions to the poroelasticity equation system, finite element methods are powerful tools for obtaining numerical solutions. In this dissertation, we develop a two-field finite element solver for poroelasticity. The Darcy flow is discretized by a lowest-order weak Galerkin (WG) finite element method for the fluid pressure. The linear elasticity is discretized by enriched Lagrangian ($EQ_1$) elements for the solid displacement. A first-order backward Euler time discretization is implemented to solve the coupled time-dependent system on quadrilateral meshes. This poroelasticity solver has several attractive features. No stabilization is added to the system, and it is free of Poisson locking and pressure oscillations. Poroelasticity locking is avoided through an appropriate coupling of the finite element spaces for the displacement and the pressure. In the equation governing the flow in the pores, the dilation is computed as an average over each element, so that the dilation and the pressure are both approximated by constants. A rigorous error estimate shows that our method has optimal convergence rates for the displacement and the fluid flow. Numerical experiments are presented to illustrate the theoretical results. The implementation of this poroelasticity solver in deal.II couples the Darcy solver and the linear elasticity solver. We present the implementation of the Darcy solver and review the linear elasticity solver.
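The first-order backward Euler time stepping mentioned above can be illustrated on a scalar model problem (a sketch only, unrelated to the actual poroelasticity system): halving the time step roughly halves the error, consistent with first-order accuracy.

```python
import math

def backward_euler_error(dt, t_end=1.0):
    # Backward Euler for u' = -u, u(0) = 1: each step solves
    # u_{n+1} = u_n - dt * u_{n+1}, i.e. u_{n+1} = u_n / (1 + dt).
    n_steps = round(t_end / dt)
    u = 1.0
    for _ in range(n_steps):
        u = u / (1.0 + dt)
    return abs(u - math.exp(-t_end))

e_coarse = backward_euler_error(0.01)
e_fine = backward_euler_error(0.005)
print(e_coarse / e_fine)  # close to 2: first-order convergence in dt
```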
Possible directions for future work are discussed.
Item Open Access An investigation of the Novikov-Veselov equation: new solutions, stability and implications for the inverse scattering transform (Colorado State University. Libraries, 2012) Croke, Ryan P., author; Mueller, Jennifer, advisor; Bradley, Mark, committee member; Shipman, Patrick, committee member; Zhou, Yongcheng, committee member
Integrable systems in two spatial dimensions have received far less attention from scholars than their one-dimensional counterparts. In this dissertation the Novikov-Veselov (NV) equation, a (2+1)-dimensional integrable system that generalizes the famous Korteweg-de Vries (KdV) equation, is investigated. New traveling wave solutions to the NV equation are presented, along with an analysis of the stability of certain types of soliton solutions to transverse perturbations. To facilitate the investigation of the qualitative nature of various types of solutions, including solitons and their stability under transverse perturbations, a version of the pseudo-spectral numerical method introduced by Feng [J. Comput. Phys., 153(2), 1999] is developed. With this fast numerical solver, some conjectures related to the inverse scattering method for the NV equation are also examined. The scattering transform for the NV equation is the same as the scattering transform used to solve the inverse conductivity problem, a problem useful in medical applications and seismic imaging. However, recent developments have shed light on the long-term behavior of certain types of solutions to the NV equation that cannot be investigated using the inverse scattering method. The numerical method developed here is used to study these new developments.
Item Open Access Mathematical modeling of groundwater anomaly detection (Colorado State University.
Libraries, 2016) Gu, Jianli, author; Liu, Jiangguo, advisor; Carlson, Kenneth H., committee member; Zhou, Yongcheng, committee member
Public concern about groundwater quality has increased in recent years due to the massive exploitation of shale gas through hydraulic fracturing, which raises the risk of groundwater contamination. Groundwater monitoring can fill the gap between public fears and industrial production. However, studies of groundwater anomaly detection are still insufficient. The complicated sequential data patterns generated by the subsurface water environment bring many challenges that require comprehensive modeling techniques from mathematics, statistics, and machine learning for effective solutions. In this research, the Multivariate State Estimation Technique (MSET) and the One-class Support Vector Machine (1-SVM) are utilized and improved for real-time groundwater anomaly detection. The effectiveness of the two methods is validated on different data patterns drawn from the historic data of the Colorado Water Watch (CWW) program. Meanwhile, to assess the real-time responsiveness of these methods, a groundwater contamination event with contaminant transport was simulated by means of finite difference methods (FDMs). The numerical results show how the concentration of chloride changes with groundwater flow over time. By coupling the transport simulation with groundwater monitoring, the reliability of these methods for detecting a groundwater contamination event is tested. This research resolves issues encountered when conducting real-time groundwater monitoring, and the Python implementation of these methods can be easily transferred and extended to engineering practice.
Item Open Access Mathematical models for HIV-1 viral capsid structure and assembly (Colorado State University.
Libraries, 2015) Sadre-Marandi, Farrah, author; Liu, Jiangguo, advisor; Tavener, Simon, advisor; Chen, Chaoping, committee member; Hulpke, Alexander, committee member; Zhou, Yongcheng, committee member
HIV-1 (human immunodeficiency virus type 1) is a retrovirus that causes acquired immunodeficiency syndrome (AIDS). This infectious disease has high mortality rates, and HIV-1 has consequently received extensive research interest from scientists of multiple disciplines. The group-specific antigen (Gag) polyprotein precursor is the major structural component of HIV. This protein has four major domains, one of which is called the capsid (CA). These proteins join together to create the peculiar structure of HIV-1 virions. It is known that retrovirus capsid arrangements exhibit a fullerene-like structure. These caged polyhedral arrangements are built entirely from hexamers (6 joined proteins) and, by the Euler theorem, exactly 12 pentamers (5 proteins). Different distributions of these 12 pentamers result in icosahedral, tubular, or the unique HIV-1 conical-shaped capsids. To gain insight into the distinctive structure of the HIV capsid, we develop and analyze mathematical models to help understand the underlying biological mechanisms in the formation of viral capsids. The pentamer clusters introduce disclinations, and hence curvature, on the capsids. The HIV-1 capsid structure follows a (5,7)-cone pattern, with 5 pentamers in the narrow end and 7 in the broad end. We show that the curvature concentration at the narrow end is about five times higher than that at the broad end. This leads to the conclusion that the narrow end is the weakest part of the HIV-1 capsid, and to the conjecture that "the narrow end closes last during maturation but opens first during entry into a host cell." Models for icosahedral capsids are established and well received, but models for tubular and conical capsids need further investigation.
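The Euler-theorem count of exactly 12 pentamers mentioned above can be checked directly: for a closed shell built from pentagons and hexagons with three faces meeting at each vertex, the Euler characteristic V - E + F = 2 holds only when there are exactly 12 pentagons, regardless of the number of hexagons.

```python
def euler_characteristic(p, h):
    # Closed shell with p pentagonal and h hexagonal faces,
    # every edge shared by 2 faces, every vertex shared by 3 faces.
    F = p + h
    E = (5 * p + 6 * h) / 2
    V = (5 * p + 6 * h) / 3
    return V - E + F

print(euler_characteristic(12, 0))    # dodecahedron: 2
print(euler_characteristic(12, 20))   # C60 fullerene-like cage: 2
print(euler_characteristic(13, 20))   # wrong pentagon count: not 2
```

Substituting the face counts into V - E + F = 2 and simplifying gives p = 12 for any h, which is why every fullerene-like capsid, icosahedral, tubular, or conical, carries exactly 12 pentamers.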
We propose new models for tubular and conical capsids based on an extension of the Caspar-Klug quasi-equivalence theory. In particular, two and three generating vectors are used to characterize the lattice structures of tubular and conical capsids, respectively. Comparison with published HIV-1 data demonstrates good agreement of our modeling results with experimental data. It is known that there are two stages in viral capsid assembly: nucleation (formation of nuclei, i.e., hexamers) and elongation (building the closed shell). We develop a kinetic model of HIV-1 viral capsid nucleation using a 6-species dynamical system. Numerical simulations of capsid protein (CA) multimer concentrations closely match experimental data. Sensitivity and elasticity analysis of the CA multimer concentrations with respect to the association and disassociation rates further reveals the importance of CA dimers in the nucleation stage of viral capsid self-assembly.
Item Open Access Problems on decision making under uncertainty (Colorado State University. Libraries, 2019) Sarkale, Yugandhar, author; Chong, Edwin K. P., advisor; Young, Peter, committee member; Luo, J. Rockey, committee member; Zhou, Yongcheng, committee member
Humans and machines must often make rational choices in the face of uncertainty. Determining decisions, actions, choices, or alternatives that optimize objectives for real-world problems is computationally difficult. This dissertation proposes novel solutions to such optimization problems for both deterministic and stochastic cases; the proposed methods maintain near-optimal solution quality. Although the techniques developed in this work are not limited to a few examples, the applications addressed include post-hazard large-scale real-world community recovery management, path planning of UAVs that incorporates feedback from intelligence assets, and closed-loop urban target tracking in challenging environments.
As an illustration of the properties shared by the solutions developed in this dissertation, we describe the example of community recovery in depth. In the work associated with community recovery, we handle both deterministic and stochastic recovery decisions. For the deterministic problems (the outcome of recovery actions is deterministic, but we handle the uncertainty in the underlying models), we develop a sequential discrete-time decision-making framework and compute near-optimal decisions for a community modeled after Gilroy, California. We have designed stochastic models to calculate the damage to the infrastructure systems within the community after an occurrence of an earthquake. Our optimization framework for computing the recovery decisions, which is hazard agnostic (the hazard could be a nuclear explosion or a disruptive social event), is based on the approximate dynamic programming paradigm of rollout; we model the recovery decisions as a string of actions. We design several base heuristics pertaining to the problem of community recovery for use in our framework; in addition, we explore the performance of random heuristics. Besides modeling the interdependence between several networks and the cascading effect of a single recovery action on these networks, we also fuse traditional optimization approaches, such as simulated annealing, into the framework to compute efficient decisions, which mitigates the simultaneous spatial and temporal evolution of the recovery problem. For the stochastic problems, in addition to the previous complexities, the outcome of the decisions is stochastic. The inclusion of this single complexity in the problem statement necessitates an entirely novel way of developing solutions. We formulate the recovery problem in the powerful framework of Markov decision processes (MDPs). In contrast to the conventional matrix-based representation, we formulate our problem as a simulation-based MDP.
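A minimal sketch of the rollout idea described above, one-step lookahead followed by a base heuristic, on a toy deterministic problem; the states, actions, and rewards here are invented for illustration and are not the dissertation's recovery model:

```python
def rollout_action(state, actions, step, base_policy, horizon=10):
    # Rollout: try each available action, then follow the base heuristic
    # to the horizon, and pick the action with the best simulated total.
    best_action, best_value = None, float("-inf")
    for a in actions(state):
        s, r = step(state, a)
        total = r
        for _ in range(horizon):
            if not actions(s):
                break
            s, r = step(s, base_policy(s))
            total += r
        if total > best_value:
            best_action, best_value = a, total
    return best_action

# Toy deterministic problem: 'quick' pays off now but leads nowhere;
# 'setup' pays nothing now but enables a much larger later reward.
transitions = {
    ("start", "quick"): ("dead_end", 1.0),
    ("start", "setup"): ("ready", 0.0),
    ("ready", "finish"): ("done", 10.0),
}

def actions(s):
    return [a for (st, a) in transitions if st == s]

def step(s, a):
    return transitions[(s, a)]

def greedy_base(s):
    # Base heuristic: best immediate reward only.
    return max(actions(s), key=lambda a: transitions[(s, a)][1])

print(greedy_base("start"))                                 # 'quick' (myopic)
print(rollout_action("start", actions, step, greedy_base))  # 'setup' (looks ahead)
```

The base heuristic alone chooses myopically, while rollout improves on it by simulating each first action to completion, which is the essence of the rollout paradigm.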
Classical solutions for solving an MDP are inadequate here; therefore, approximation of the Q-values (based on Bellman's equation) is necessary. In our framework, we employ Approximate Policy Improvement to circumvent the limitations of the classical techniques. We also address the risk attitudes of the policymakers and decision-makers, who are key stakeholders in the recovery process. Despite the use of a state-of-the-art computational platform, additional optimization must be applied to the resulting stochastic simulation optimization problem owing to the massive size of the recovery problem. Our solutions are calculated using one of the best-performing simulation optimization methods, Optimal Computing Budget Allocation. Further, in the stochastic setting, scheduling decisions for building portfolio recovery is even more computationally difficult than for some of the more coarsely modeled networks, such as electric power networks (EPNs). Our work proposes a stochastic non-preemptive scheduling framework to address this challenging problem at scale. For the stochastic problems, one of the major highlights of this dissertation is the decision-automation framework for EPN recovery. The novel decision-making-under-uncertainty algorithms developed to plan sequential decisions for EPN recovery demonstrate state-of-the-art performance; our algorithms should be of interest to practitioners in any field that deals with the real-world large-scale problem of selecting a single choice from a massive number of alternatives. The quality of recovery decisions calculated using the decision-automation framework does not deteriorate despite a massive increase in the size of the recovery problem. Although the focus of this dissertation is primarily on the recovery of communities affected by hazards, our algorithms contribute to the general problem of MDPs with massive action spaces.
The primary objective of our work on the community recovery problem is to address food security. In particular, we aim either to restore the community to pre-hazard food-security levels in the minimum amount of time or to schedule the recovery actions so that the maximum number of people are food secure after a sequence of decisions. In essence, our framework accommodates stochastic hazard models, handles the stochastic outcomes of human or machine repair actions, has lookahead, does not suffer from decision fatigue, and incorporates the current policies of the decision-makers. The decisions calculated using our framework were aided by the free availability of a powerful supercomputer.
Item Open Access The mathematical modeling and analysis of nonlocal ecological invasions and savanna population dynamics (Colorado State University. Libraries, 2013) Strickland, William Christopher, author; Dangelmayr, Gerhard, advisor; Shipman, Patrick, advisor; Zhou, Yongcheng, committee member; Brown, Cynthia, committee member
The main focus of this dissertation is the development and analysis of two new mathematical models that individually address major open problems in ecology. The first challenge is to characterize and model the processes that result in a savanna ecosystem as a stable state between grassland and forest; the second involves modeling the nonlocal spread of a biological invader over heterogeneous terrain while incorporating the influence of a mass transportation network on the system. Both models utilize and compare work done in other, often more opaque, modeling paradigms to develop transparent, application-ready solutions that can be easily adapted to and inform ecological fieldwork. Savanna is defined by the coexistence of trees and grass in seasonally dry areas of the tropics and subtropics, but there is no consensus as to why savanna occurs as a stable state between tropical grassland and forest.
To understand the dynamics behind the tree-grass relationship, we begin by reviewing and analyzing the approaches in currently available savanna models. Next, we develop a mathematical model for savanna water resource dynamics based on FLAMES, an Australian process-based software model created to capture the effects of seasonal rainfall and fire disturbance on savanna tree stands. As a mathematically explicit dynamical system represented by coupled differential equations, the new model has the immediate advantage of being concise and transparent compared to previous models, yet it remains robust in its ability to account for different climate and soil characteristics. Through analysis of the model, we show a clear connection between climate and stand structure, with particular emphasis on the length and severity of the dry season. As a result, we can numerically quantify the parameter space of year-by-year stochastic variability in stand structure based on rainfall and fire probabilities. This yields a characterization of savanna existence, in the absence of extreme fire suppression, in terms of the availability of water resources in the soil due to climate and groundwater retention. One example of the model's success is its ability to predict a savanna environment for Darwin, Australia, and a forest environment for Sydney, even though Sydney receives less annual rainfall than Darwin. The majority of this dissertation focuses on modeling the spread of a biological invader in heterogeneous domains, where invasion often takes place nonlocally through nearby human transportation networks. Since early detection and ecological forecasting of invasive species are urgently needed for rapid response, accurately modeling invasions remains a high priority for resource managers. To achieve this goal, we begin by revisiting a particular class of deterministic contact models obtained from a stochastic birth process for invasive organisms.
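The stochastic birth process underlying these contact models can be sketched with a simple Gillespie-style simulation of a pure (Yule) birth process, in which each individual independently reproduces at a constant rate; the rate and time values here are arbitrary illustration choices, not parameters from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)

def yule_population(birth_rate, t_end, n0=1):
    # Gillespie simulation of a pure (Yule) birth process: with n
    # individuals, the waiting time to the next birth is exponential
    # with total rate birth_rate * n.
    n, t = n0, 0.0
    while True:
        t += rng.exponential(1.0 / (birth_rate * n))
        if t > t_end:
            return n
        n += 1

runs = [yule_population(1.0, 1.0) for _ in range(2000)]
print(np.mean(runs))  # near e ≈ 2.718, since E[N(t)] = n0 * exp(rate * t)
```

Averaging many realizations recovers the exponential mean growth that the deterministic contact models capture.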
We then derive a deterministic integro-differential equation for a more general contact model and show that the quantity of interest may be interpreted not as population size but rather as the probability of species occurrence. We then show how landscape heterogeneity can be included in the model by utilizing statistical habitat suitability models, which condense diverse ecological data into a single statistic. Next, we develop a model for vector-based epidemic transport on a network represented by a strongly connected, directed graph, and we analytically compute the exact optimal control for suppressing the infected graph vectors. Since this model does not require any special assumptions about the underlying spatiotemporal epidemic spread process, it should prove suitable in a variety of application contexts where network-based disease vector dynamics need to be understood and properly controlled. We then discuss other methods of control for the special case of the integro-differential model developed previously and explore numerical results of applying this control. Finally, we validate model results for the Bromus tectorum invasion of Rocky Mountain National Park using data collected by ecologists over the past two decades and illustrate the effect of various controls on this data. A final chapter concerns a problem of cognitive population dynamics, namely vowel pronunciation in natural languages. We begin by developing a structured-population approach to modeling changes in vowel systems, taking into account learning patterns and effects such as social trends. Our model treats vowel pronunciation as a continuous variable in vowel space and allows for continuous dependence of vowel pronunciation on time and the age of the speaker. The theory of mixtures with continuous diversity provides a framework for the model, which extends the McKendrick-von Foerster equation to populations with age and phonetic structures.
Numerical integrations of the model reveal how shifts in vowel pronunciation may occur in jumps or continuously, given perturbations such as the influx of an immigrant population.
Item Open Access Unsupervised binary code learning for approximate nearest neighbor search in large-scale datasets (Colorado State University. Libraries, 2016) Zhang, Hao, author; Beveridge, Ross, advisor; Draper, Bruce, advisor; Anderson, Chuck, committee member; Zhou, Yongcheng, committee member
Nearest neighbor search is an important operation whose goal is to find items in a dataset that are similar to a given query. It has a number of applications, such as content-based image retrieval (CBIR), near-duplicate image detection, and recommender systems. With the rapid development of the Internet and digital devices, it has become easy to share and collect data. Taking a modern social network as an example, Facebook was reported in 2012 to be collecting more than 500 terabytes of text, images, and videos each day. Conventional nearest neighbor search using a linear scan becomes prohibitive when dealing with large-scale datasets like this. This thesis proposes a new quantization-based binary code learning algorithm, called Unit Query and Location Sensitive Hashing (UnitQLSH), to solve the problem of approximate nearest neighbor search for large-scale, unsupervised, unit-length data. UnitQLSH maps each high-dimensional data sample to a binary code constrained to reside on the unit sphere. This constraint is very helpful in improving retrieval performance. UnitQLSH also takes advantage of the approximate linearity of local neighborhoods of the data to further improve performance. Moreover, given a query, a weight vector is computed that indicates the significance of the different bits. The Hamming distances are weighted by this vector to provide much more accurate retrievals than traditional approaches without any weighting scheme.
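The query-dependent bit weighting described above can be sketched as follows (a toy example with made-up codes and weights, not UnitQLSH's actual learned codes or weighting): bits the weight vector marks as significant dominate the distance, so a mismatch on an important bit ranks an item lower than a mismatch on an unimportant one.

```python
import numpy as np

def weighted_hamming(query_bits, db_bits, weights):
    # db_bits: (n, b) array of 0/1 codes; weights: per-bit significance
    # for this particular query. Mismatched bits contribute their weight.
    mismatches = db_bits != query_bits  # boolean (n, b)
    return mismatches @ weights         # weighted distance per database item

q = np.array([1, 0, 1, 1])
db = np.array([[1, 0, 1, 0],   # differs only in bit 3
               [0, 0, 1, 1],   # differs only in bit 0
               [1, 0, 1, 1]])  # exact match
w = np.array([0.1, 1.0, 1.0, 2.0])  # bit 3 matters most for this query
d = weighted_hamming(q, db, w)
print(d)                   # [2.0, 0.1, 0.0]
print(np.argsort(d))       # ranking: exact match first, bit-3 mismatch last
```

With plain (unweighted) Hamming distance the first two items would tie at distance 1; the weights break the tie according to the query.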
Compared to existing state-of-the-art approaches, the proposed algorithm outperforms them significantly.
Item Open Access Weak Galerkin finite element methods for elasticity and coupled flow problems (Colorado State University. Libraries, 2020) Harper, Graham Bennett, author; Liu, Jiangguo, advisor; Bangerth, Wolfgang, committee member; Guzik, Stephen, committee member; Tavener, Simon, committee member; Zhou, Yongcheng, committee member
We present novel stabilizer-free weak Galerkin finite element methods for linear elasticity and coupled Stokes-Darcy flow, with a comprehensive treatment of the theoretical results and the numerical methods for each. Weak Galerkin finite element methods take a discontinuous approximation space and bind degrees of freedom together through the discrete weak gradient, which involves solving a small symmetric positive-definite linear system on every element of the mesh. We introduce notation and analysis using a general framework that highlights properties unifying many existing weak Galerkin methods; this framework makes analysis of the methods much more straightforward. The method for linear elasticity on quadrilateral and hexahedral meshes uses piecewise constant vectors to approximate the displacement on each cell and the Raviart-Thomas space for the discrete weak gradient. We use the Schur complement to simplify the solution of the global linear system and further increase computational efficiency. We prove first-order convergence in the L2 norm, verify our analysis with numerical experiments, and compare to another weak Galerkin approach for this problem. The method for coupled Stokes-Darcy flow uses an extensible multinumerics approach on quadrilateral meshes. The Darcy flow discretization uses a weak Galerkin finite element method with piecewise constants approximating the pressure and the Arbogast-Correa space for the weak gradient. The Stokes domain discretization uses the classical Bernardi-Raugel pair.
We prove first-order convergence in the energy norm and verify our analysis with numerical experiments. All algorithms implemented in this dissertation are publicly available as part of James Liu's DarcyLite and Darcy+ packages and as part of the deal.II library.
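The Schur-complement elimination mentioned in the last abstract can be sketched on a small generic saddle-point system (a toy stand-in, not the actual weak Galerkin discretization): eliminating the block that is cheap to invert reduces the global solve to a much smaller system, and the answer matches a direct solve of the full block system.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.diag(rng.uniform(1.0, 2.0, 4))   # SPD block, cheap to invert
B = 0.3 * rng.standard_normal((4, 2))
C = 5.0 * np.eye(2)
f = rng.standard_normal(4)
g = rng.standard_normal(2)

# Eliminate the first unknown block via the Schur complement
# S = C - B^T A^{-1} B, then back-substitute for x.
S = C - B.T @ np.linalg.solve(A, B)
y = np.linalg.solve(S, g - B.T @ np.linalg.solve(A, f))
x = np.linalg.solve(A, f - B @ y)

# Same answer as solving the full block system directly.
K = np.block([[A, B], [B.T, C]])
direct = np.linalg.solve(K, np.concatenate([f, g]))
print(np.allclose(np.concatenate([x, y]), direct))  # True
```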