Browsing by Author "Estep, Donald, committee member"
Now showing 1 - 9 of 9
Item Open Access
A fourth-order solution-adaptive finite-volume algorithm for compressible reacting flows on mapped domains (Colorado State University. Libraries, 2019)
Owen, Landon, author; Gao, Xinfeng, advisor; Guzik, Stephen, committee member; Marchese, Anthony, committee member; Estep, Donald, committee member
Accurate computational modeling of reacting flows is necessary to improve combustion efficiency and reduce emissions in combustion devices such as gas turbine engines. Combusting flows consist of a variety of phenomena, including fluid mixing, chemical kinetics, turbulence-chemistry interaction, and heat and mass transfer. The associated scales range from atomic scales up to continuum scales at the device level. Combusting flows are therefore strongly nonlinear and require multiphysics, multiscale modeling. This research employs a fourth-order finite-volume method and leverages continuing gains in modern computing power to achieve high-fidelity modeling of flow characteristics and combustion dynamics. However, it is challenging to ensure that computational models are accurate, stable, and efficient because of the multiscale and multiphysics nature of combusting flows. The goal of this research is therefore to create a robust, high-order finite-volume algorithm on mapped domains with adaptive mesh refinement to solve compressible combustion problems in relatively complex geometries on parallel computing architectures. The research comprises five main efforts. The first effort is to extend the existing algorithm to solve the compressible Navier-Stokes equations on mapped domains by implementing fourth-order accurate viscous discretization operators. The second effort is to incorporate the species transport equations and chemical kinetics into the solver to enable combustion modeling. The third effort is to ensure stability of the algorithm for combustion simulations over a wide range of speeds.
The fourth effort is to ensure that all new functionality utilizes the parallel adaptive mesh refinement infrastructure to achieve efficient computations on high-performance computers. The final effort is to apply the algorithm to a range of flow problems, including a multispecies flow with Mach reflection, multispecies mixing flow through a planar burner, and oblique detonation waves over a wedge. This research produces a verified and validated, fourth-order finite-volume algorithm for solving thermally perfect, compressible, chemically reacting flows on mapped domains that are adaptively refined and represent moderately complex geometries. In the future, the framework established in this research will be extended to model reactive flows in gas turbine combustors.

Item Open Access
Biophysical behavior in tropical South America (Colorado State University. Libraries, 2011)
Baker, Ian Timothy, author; Denning, A. Scott, advisor; Randall, David, committee member; Coughenour, Michael, committee member; Gao, Wei, committee member; Estep, Donald, committee member
To view the abstract, please see the full text of the document.

Item Open Access
Compressive measurement design for detection and estimation of sparse signals (Colorado State University. Libraries, 2013)
Zahedi, Ramin, author; Chong, Edwin K. P., advisor; Pezeshki, Ali, advisor; Estep, Donald, committee member; Young, Peter M., committee member
We study the problem of designing compressive measurement matrices for two sets of problems. In the first set, we consider the problem of adaptively designing compressive measurement matrices for estimating time-varying sparse signals. We formulate this problem as a Partially Observable Markov Decision Process (POMDP). This formulation allows us to use Bellman's principle of optimality in the implementation of multi-step lookahead designs of compressive measurements. We introduce two variations of the compressive measurement design problem.
In the first variation, we consider the problem of selecting a prespecified number of measurement vectors from a predefined library as entries of the compressive measurement matrix at each time step. In the second variation, the number of compressive measurements, i.e., the number of rows of the measurement matrix, is chosen adaptively. Once the number of measurements is determined, the matrix entries are chosen according to a prespecified adaptive scheme. Each of these two problems is judged by a separate performance criterion. The gauge of efficiency in the first problem is the conditional mutual information between the sparse signal support and the measurements. The second problem uses a linear combination of the number of measurements and the conditional mutual information as the performance measure. We present several simulations whose primary focus is the application of a method known as rollout. The significant computational load of rollout has also inspired us to adapt two data-association heuristics to the compressive sensing paradigm in our simulations. These heuristics show promising decreases in the amount of computation required for propagating distributions and searching for optimal solutions. In the second set of problems, we consider testing for the presence (or detection) of an unknown static sparse signal in additive white noise. Given a fixed measurement budget, much smaller than the dimension of the signal, we consider the general problem of designing compressive measurements to maximize the measurement signal-to-noise ratio (SNR), since increasing SNR improves the detection performance of a large class of detectors. We use a lexicographic optimization approach, where the optimal measurement design for sparsity level k is sought only among the set of measurement matrices that satisfy the optimality conditions for sparsity level k-1.
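The conditional-mutual-information criterion used in the first design problem can be made concrete with a toy example that is not from the dissertation: a 1-sparse signal of unit amplitude at one of four locations, a scalar noisy measurement, and two hypothetical probe vectors. The probe whose responses distinguish all four locations carries more information about the support; all numerical values here are invented for illustration.

```python
import math

SIGMA = 0.1   # measurement-noise standard deviation (made-up value)
LOCS = 4      # the unknown signal is 1-sparse over 4 candidate locations

def gauss(y, mu):
    return math.exp(-0.5 * ((y - mu) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

def mutual_info(phi):
    """I(support; y) in nats for the scalar measurement y = phi[j] + noise,
    with the support index j uniform over the LOCS locations."""
    h_cond = 0.5 * math.log(2 * math.pi * math.e * SIGMA ** 2)  # H(y | j), Gaussian
    h_marg, dy, y = 0.0, 0.001, -2.0
    while y <= 2.0:  # mixture means all lie in [-1, 1], so [-2, 2] suffices
        p = sum(gauss(y, mu) for mu in phi) / LOCS
        if p > 0.0:
            h_marg -= p * math.log(p) * dy
        y += dy
    return h_marg - h_cond

# Two hypothetical rows for the measurement matrix:
coordinate_probe = [1.0, 0.0, 0.0, 0.0]  # senses only the first location
spread_probe = [1.0, 0.5, -0.5, -1.0]    # distinct response for every location

# The spread probe nearly resolves all log(4) nats of support uncertainty;
# the coordinate probe resolves far less.
assert mutual_info(spread_probe) > mutual_info(coordinate_probe)
```

A multi-step lookahead design would score candidate rows by criteria of this kind before committing to a measurement; the sketch above evaluates only a single step.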
We consider optimizing two different SNR criteria: a worst-case SNR measure over all possible realizations of a k-sparse signal, and an average SNR measure with respect to a uniform distribution on the locations of the up to k nonzero entries in the signal. We establish connections between these two criteria and certain classes of tight frames. We constrain our measurement matrices to the class of tight frames to avoid coloring the noise covariance matrix. For the worst-case problem, we show that the optimal measurement matrix is a Grassmannian line packing for most, and a uniform tight frame for all, sparse signals. For the average SNR problem, we prove that the optimal measurement matrix is a uniform tight frame with minimum sum-coherence for most, and a tight frame for all, sparse signals.

Item Open Access
Continuum limits of Markov chains with application to wireless network modeling and control (Colorado State University. Libraries, 2014)
Zhang, Yang, author; Chong, Edwin K. P., advisor; Estep, Donald, committee member; Luo, J. Rockey, committee member; Pezeshki, Ali, committee member
We investigate the continuum limits of a class of Markov chains. The investigation of such limits is motivated by the desire to model networks with a very large number of nodes. We show that a sequence of such Markov chains, indexed by the number N of components in the system that they model, converges in a certain sense to its continuum limit, which is the solution of a partial differential equation (PDE), as N goes to infinity. We provide sufficient conditions for the convergence and characterize the rate of convergence. As an application, we approximate Markov chains modeling large wireless networks by PDEs. We first describe PDE models for networks with uniformly located nodes, and then generalize to networks with nonuniformly located, and possibly mobile, nodes.
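A generic toy example (not the dissertation's wireless-network model) illustrates the continuum-limit idea: the occupancy distribution of a lazy symmetric random walk on a 1-D lattice approaches the heat kernel, i.e., the solution of the diffusion PDE u_t = (1/4) u_xx, as the number of steps grows. Lattice size and step count below are arbitrary illustrative choices.

```python
import math

N_SITES = 1001   # 1-D lattice; the walker starts at the center site
STEPS = 400      # number of Markov-chain transitions

# Lazy symmetric walk: stay w.p. 1/2, step left or right w.p. 1/4 each.
p = [0.0] * N_SITES
p[N_SITES // 2] = 1.0
for _ in range(STEPS):
    q = [0.0] * N_SITES
    for i in range(N_SITES):
        if p[i]:
            q[i] += 0.5 * p[i]
            q[i - 1] += 0.25 * p[i]   # support stays well inside the lattice
            q[i + 1] += 0.25 * p[i]
    p = q

# Continuum limit: heat kernel with variance STEPS * Var(one step) = STEPS / 2.
center, var = N_SITES // 2, STEPS * 0.5
err = max(
    abs(p[i] - math.exp(-((i - center) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var))
    for i in range(N_SITES)
)
assert err < 1e-3   # chain pmf and PDE solution agree pointwise
```

Evaluating the Gaussian is far cheaper than iterating the chain, which is the practical point: once the PDE limit is identified, the large-N system can be analyzed without simulating it.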
While traditional Monte Carlo simulation of very large networks is practically infeasible, PDEs can be solved with reasonable computational overhead using well-established mathematical tools. Based on the PDE models, we develop a method to control transmissions in nonuniform networks so that the continuum limit is invariant under perturbations in node locations. This enables the networks to maintain stable global characteristics in the presence of varying node locations.

Item Open Access
Improvements in computational electromagnetics solver efficiency: theoretical and data-driven approaches to accelerate full-wave and ray-based methods (Colorado State University. Libraries, 2020)
Key, Cam, author; Notaros, Branislav, advisor; Pezeshki, Ali, committee member; Estep, Donald, committee member; Ilić, Milan, committee member
Simulation plays an ever-increasing role in modern electrical engineering design. However, the computational electromagnetics solvers on which these simulations rely are often inefficient. For simulations requiring high accuracy, full-wave techniques such as the finite element method and the method of moments dominate, yet existing practices for these techniques frequently allocate degrees of freedom sub-optimally, yielding longer solve times than necessary for a given accuracy. For larger-scale simulations, frequency-asymptotic methods such as shooting-bouncing-ray tracing dominate, yet existing algorithms suffer from incomplete parallelizability and are consequently unable to take full advantage of modern massively parallel computing resources. We present several approaches, both theoretical and empirical, to address these efficiency problems.

Item Open Access
Joint tail modeling via regular variation with applications in climate and environmental studies (Colorado State University. Libraries, 2013)
Weller, Grant B., author; Cooley, Dan, advisor; Breidt, F. Jay, committee member; Estep, Donald, committee member; Schumacher, Russ, committee member
This dissertation presents applied, theoretical, and methodological advances in the statistical analysis of multivariate extreme values, employing the underlying mathematical framework of multivariate regular variation. Existing theory is applied in two studies in climatology; these investigations represent novel applications of the regular variation framework in this field. Motivated by applications in environmental studies, a theoretical development in the analysis of extremes is introduced, along with novel statistical methodology. This work first details a novel study which employs the regular variation modeling framework to study uncertainties in a regional climate model's simulation of extreme precipitation events along the west coast of the United States, with a particular focus on the Pineapple Express (PE), a special type of winter storm. We model the tail dependence in past daily precipitation amounts seen in observational data and output of the regional climate model, and we link atmospheric pressure fields to PE events. The fitted dependence model is utilized as a stochastic simulator of future extreme precipitation events, given output from a future-scenario run of the climate model. The simulator and link to pressure fields are used to quantify the uncertainty in a future simulation of extreme precipitation events from the regional climate model, given boundary conditions from a general circulation model. A related study investigates two case studies of extreme precipitation from six regional climate models in the North American Regional Climate Change Assessment Program (NARCCAP). We find that simulated winter season daily precipitation along the Pacific coast exhibits tail dependence to extreme events in the observational record.
When considering summer season daily precipitation over a central region of the United States, however, we find almost no correspondence between extremes simulated by NARCCAP and those seen in observations. Furthermore, we discover less consistency among the NARCCAP models in the tail behavior of summer precipitation over this region than is seen in winter precipitation over the west coast region. The analyses in this work indicate that the NARCCAP models are effective at downscaling winter precipitation extremes in the west coast region, but questions remain about their ability to simulate summer-season precipitation extremes in the central region. A deficiency of existing modeling techniques based on the multivariate regular variation framework is their inability to account for hidden regular variation, a feature of many theoretical examples and real data sets. One particular example of this deficiency is the inability to distinguish asymptotic independence from independence in the usual sense. This work develops a novel probabilistic characterization of random vectors possessing hidden regular variation as the sum of independent components. The characterization is shown to be asymptotically valid via a multivariate tail-equivalence result, and an example is demonstrated via simulation. The sum characterization is employed to perform inference for the joint tail of random vectors possessing hidden regular variation. This dissertation develops a likelihood-based estimation procedure, employing a version of the Monte Carlo expectation-maximization algorithm modified for tail estimation. The methodology is demonstrated on simulated data and applied to a bivariate series of air pollution data from Leeds, UK. We demonstrate the improvement in tail risk estimates offered by the sum representation over approaches that ignore hidden regular variation in the data.

Item Open Access
Matter effects on neutrino oscillations (Colorado State University. Libraries, 2013)
Gordon, Michael, author; Toki, Walter, advisor; Wilson, Robert, committee member; Estep, Donald, committee member
An introduction to neutrino oscillations in vacuum is presented, followed by a survey of various techniques for obtaining either exact or approximate expressions for νμ → νe oscillations in matter. The method developed by Arafune, Koike, and Sato uses a perturbative analysis to find an approximation for the evolution operator. The method used by Freund yields an approximate oscillation probability by diagonalizing the Hamiltonian, finding the eigenvalues and eigenvectors, and then using those to find modified mixing angles with the matter effect taken into account. The method devised by Mann, Kafka, Schneps, and Altinok produces an exact expression for the oscillation probability by determining the evolution operator explicitly. These methods are compared to each other using the T2K, MINOS, NOνA, and LBNE parameters.

Item Open Access
Performance assessment of multi-walled carbon nanotube interconnects using advanced polynomial chaos schemes (Colorado State University. Libraries, 2019)
Bhatnagar, Sakshi, author; Nikdast, Mahdi, advisor; Pezeshki, Ali, committee member; Estep, Donald, committee member
With the continuous miniaturization of the latest VLSI technologies, manufacturing uncertainties in nanoscale processes and operations are unpredictable at the chip, package, and board levels of integrated systems. To overcome such issues, simulation solvers are required that model the forward propagation of uncertainties, or variations in random processes, from the device level to the network response. Polynomial Chaos Expansion (PCE) of the random variables is the most common technique for modeling this unpredictability. Existing methods for uncertainty quantification share a major drawback: as the number of random variables in a system increases, the computational cost and time increase polynomially.
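The scaling claim can be made concrete. For a total-degree-p expansion in n random variables, the number of PCE basis polynomials is the binomial coefficient C(n+p, p), which grows polynomially in n; a hyperbolic (q-quasi-norm) truncation, a standard sparsification device from the PCE literature rather than code from this dissertation, retains far fewer terms. A sketch with illustrative values:

```python
from math import comb
from itertools import product

def total_degree_terms(n, p):
    """Number of PCE basis polynomials of total degree <= p in n random variables."""
    return comb(n + p, p)

def hyperbolic_terms(n, p, q):
    """Size of the hyperbolic index set {alpha : ||alpha||_q <= p} for 0 < q < 1."""
    return sum(
        1
        for alpha in product(range(p + 1), repeat=n)
        if sum(a ** q for a in alpha) ** (1.0 / q) <= p + 1e-9
    )

# A degree-3 basis grows polynomially with the number of random variables:
sizes = {n: total_degree_terms(n, 3) for n in (2, 5, 10, 20)}
print(sizes)   # {2: 10, 5: 56, 10: 286, 20: 1771}

# A hyperbolic truncation (q = 0.5) keeps only a fraction of those terms:
print(hyperbolic_terms(5, 3, 0.5), "of", total_degree_terms(5, 3))   # 16 of 56
```

Since the number of model evaluations needed to fit the expansion tracks the basis size, trimming the index set is exactly where schemes like HPCE recover tractability.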
In order to alleviate the poor scalability of standard PC approaches, a predictor-corrector polynomial chaos scheme and a hyperbolic polynomial chaos expansion (HPCE) scheme are proposed in this dissertation. In the predictor-corrector polynomial chaos scheme, a low-fidelity meta-model is generated using an Equivalent Single Conductor (ESC) approximation model, and its accuracy is then enhanced using a low-order multi-conductor circuit (MCC) model called the corrector model. In HPCE, a sparser polynomial expansion is generated based on a hyperbolic truncation criterion. These schemes result in an immense reduction in CPU cost and simulation time. This dissertation presents a novel approach to quantifying the uncertainties in multi-walled carbon nanotubes using these schemes. The accuracy and validity of these schemes are demonstrated using various numerical examples.

Item Open Access
The conformal perfectly matched layer for electrically large curvilinear higher order finite element methods in electromagnetics (Colorado State University. Libraries, 2017)
Smull, Aaron P., author; Notaros, Branislav, advisor; Pezeshki, Ali, committee member; Estep, Donald, committee member
The implementation of open-region boundary conditions in computational electromagnetics for higher order finite element methods presents a well-known set of challenges. One such boundary condition is the perfectly matched layer (PML). In this thesis, the generation of perfectly matched layers for arbitrary convex geometric hexahedral meshes is discussed, using a method that can be implemented without differential-operator-based absorbing boundary conditions or coupling to boundary integral equations. A method for automated perfectly matched layer element generation is presented, with geometries based on surface projections from a convex mesh. Material parameters are generated via concepts from transformation electromagnetics, drawing on the complex-coordinate-transformation-based conformal PMLs in the existing literature.
A material parameter correction algorithm is also presented, based on a modified gradient-descent optimization algorithm. Numerical results are presented with comparison to analytical results and commercial software, with studies on the effects of discretization error on the effectiveness of the perfectly matched layer. Good agreement is found between simulated and analytical results, and between simulated results and commercial software.
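The complex-coordinate stretching behind such PMLs can be caricatured in one dimension: replacing x by the stretched coordinate x̃ = x + (i/k)∫σ ds leaves the propagating phase, and hence the wave impedance, continuous at the interface, while the amplitude decays inside the layer, so in the continuous limit nothing reflects. The wavenumber and absorption profile below are invented for illustration and are unrelated to the thesis's 3-D conformal construction.

```python
import cmath
import math

K = 2 * math.pi       # wavenumber of the incident 1-D plane wave (made-up)
X0, DEPTH = 1.0, 1.0  # the PML occupies X0 <= x <= X0 + DEPTH
SIGMA_MAX = 15.0      # peak of the quadratic absorption profile (made-up)

def field(x):
    """u(x) = exp(i*K*x_tilde), where x_tilde = x + (i/K) * integral of sigma.

    The complex stretch multiplies the amplitude by exp(-integral of sigma)
    while leaving the propagating phase exp(i*K*x) untouched."""
    if x <= X0:
        return cmath.exp(1j * K * x)
    # sigma(s) = SIGMA_MAX * ((s - X0) / DEPTH)**2 integrates in closed form:
    sigma_int = SIGMA_MAX * DEPTH / 3.0 * ((x - X0) / DEPTH) ** 3
    return cmath.exp(1j * K * x) * math.exp(-sigma_int)

print(abs(field(X0)))          # amplitude 1.0 at the interface: no jump
print(abs(field(X0 + DEPTH)))  # exp(-5) ~ 0.0067 after one pass through the layer
```

In a discretized solver the match is no longer exact, which is why studies of discretization error, such as those reported above, matter for PML effectiveness.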