Browsing by Author "Pinaud, Olivier, committee member"
Now showing 1 - 15 of 15
Item Open Access
A constrained optimization model for partitioning students into cooperative learning groups (Colorado State University. Libraries, 2016)
Heine, Matthew Alan, author; Kirby, Michael, advisor; Pinaud, Olivier, committee member; Henry, Kimberly, committee member
The problem of the constrained partitioning of a set using quantitative relationships amongst the elements is considered. An approach based on constrained integer programming is proposed that permits a group objective function to be optimized subject to group quality constraints. A motivation for this problem is the partitioning of students, e.g., in middle school, into groups that target educational objectives. The method is compared to another grouping algorithm in the literature on a data set collected in the Poudre School District.
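
To make the formulation concrete, here is a minimal sketch of a constrained-integer-programming grouping model in the spirit described above (toy scores, affinities, and constraints; not the thesis model), using the PuLP library:

```python
# Toy ILP for partitioning students into groups: maximize realized pair
# affinity subject to group-size and group-quality constraints.
# All data and bounds below are hypothetical placeholders.
import itertools
import pulp

students = ["s1", "s2", "s3", "s4", "s5", "s6"]
groups = ["g1", "g2"]
score = {"s1": 3, "s2": 5, "s3": 2, "s4": 4, "s5": 1, "s6": 5}
pairs = list(itertools.combinations(students, 2))
affinity = {p: 1 for p in pairs}  # placeholder pairwise compatibility

prob = pulp.LpProblem("grouping", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (students, groups), cat="Binary")
# y[p][g] = 1 if both members of pair p land in group g (linearized product)
y = pulp.LpVariable.dicts("y", (pairs, groups), cat="Binary")

# Objective: total affinity realized within groups
prob += pulp.lpSum(affinity[p] * y[p][g] for p in pairs for g in groups)

for s in students:                      # each student in exactly one group
    prob += pulp.lpSum(x[s][g] for g in groups) == 1
for g in groups:                        # equal sizes and a quality floor
    prob += pulp.lpSum(x[s][g] for s in students) == len(students) // len(groups)
    prob += pulp.lpSum(score[s] * x[s][g] for s in students) >= 8
for (a, b) in pairs:                    # linearization: y <= x_a, y <= x_b
    for g in groups:
        prob += y[(a, b)][g] <= x[a][g]
        prob += y[(a, b)][g] <= x[b][g]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for g in groups:
    print(g, [s for s in students if x[s][g].value() == 1])
```
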
Item Open Access
Computational advancements in the D-bar reconstruction method for 2-D electrical impedance tomography (Colorado State University. Libraries, 2016)
Alsaker, Melody, author; Mueller, Jennifer L., advisor; Cheney, Margaret, committee member; Notaros, Branislav, committee member; Pinaud, Olivier, committee member
We study the problem of reconstructing 2-D conductivities from boundary voltage and current density measurements, also known as the electrical impedance tomography (EIT) problem, using the D-bar inversion method, based on the 1996 global uniqueness proof by Adrian Nachman. We focus on the computational implementation and efficiency of the D-bar algorithm, its application to finite-precision practical data in human thoracic imaging, and the quality and spatial resolution of the resulting reconstructions. The main contributions of this work are (1) a parallelized computational implementation of the algorithm which has been shown to run in real time, thus demonstrating the feasibility of the D-bar method for use in real-time bedside imaging, and (2) a modification of the algorithm to include a priori data in the form of approximate organ boundaries and (optionally) conductivity estimates, which we show to be effective in improving spatial resolution in the resulting reconstructions. These computational advancements are tested using both numerically simulated data and experimental human and tank data collected with the ACE1 EIT machine at CSU. In this work, we provide details regarding the theoretical background and practical implementation of each advancement, we demonstrate the effectiveness of the algorithm modifications through multiple experiments, and we provide discussion and conclusions based on the results.

Item Open Access
Connections between climate sensitivity and large-scale extratropical dynamics (Colorado State University. Libraries, 2019)
Davis, Luke L. B., author; Thompson, David W. J., advisor; Birner, Thomas, advisor; Randall, David A., committee member; Pinaud, Olivier, committee member
The response of the extratropical storm tracks to anthropogenic forcing is one of the most important but poorly understood aspects of climate change. The direct, thermodynamic effects of climate change are relatively well understood, but their two-way interactions with large-scale extratropical dynamics are extremely difficult to predict. There is thus a continued need for a robust understanding of how this coupling evolves in space and time. The dry dynamical core is one of the simplest numerical models for studying the response of the extratropical storm tracks to climate change. In the model, the extratropical circulation is forced by relaxing to a radiative equilibrium profile using linear damping. The linear damping coefficient plays an essential role in governing the structure of the circulation, yet despite decades of research with the dry dynamical core, its role has received relatively little scrutiny. In this thesis, we systematically vary the damping rate and the equilibrium temperature field in a dry dynamical core in order to understand how the amplitude of the damping influences extratropical dynamics. Critically, we prove that the damping rate is a measure of the climate sensitivity of the dry atmosphere. The key finding is that the structure of the extratropical circulation is a function of the climate sensitivity. Larger damping timescales – which are equivalent to higher climate sensitivities – lead to a less dynamically active extratropical circulation, equatorward shifts in the jet, and a background state that is almost neutral to baroclinic instability. They also lead to increases in the serial correlation and relative strength of the annular modes of climate variability. It is argued that the climate sensitivity of the dry atmosphere may be identifiable from its dynamical signatures, and that understanding the response of the circulation to climate change depends critically on understanding its climate sensitivity.
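
For context, the relaxation forcing used in such dry dynamical cores is typically written schematically (in Held–Suarez-type setups) as Newtonian damping of temperature toward a prescribed equilibrium profile:

```latex
% Schematic Held-Suarez-type thermal forcing: Newtonian relaxation of
% temperature toward a prescribed radiative-equilibrium profile T_eq.
% tau = 1/k_T is the linear damping (relaxation) timescale varied above.
\[
  \frac{\partial T}{\partial t}
    \;=\; \cdots \;-\; k_T(\phi,\sigma)\,\bigl[\,T - T_{\mathrm{eq}}(\phi,\sigma)\,\bigr],
  \qquad \tau \equiv k_T^{-1}.
\]
```
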
Item Open Access
Data-driven methods for compact modeling of stochastic processes (Colorado State University. Libraries, 2024)
Johnson, Mats S., author; Aristoff, David, advisor; Cheney, Margaret, committee member; Pinaud, Olivier, committee member; Krapf, Diego, committee member
Stochastic dynamics are prevalent throughout many scientific disciplines, where finding useful compact models is an ongoing pursuit. However, the simulations involved are often high-dimensional, complex problems necessitating vast amounts of data. This thesis addresses two approaches for handling such complications: coarse graining and neural networks. First, by combining Markov renewal processes with Mori-Zwanzig theory, coarse-graining error can be eliminated when modeling the transition probabilities of the system. Second, instead of explicitly defining the low-dimensional approximation, the appropriate subspace is uncovered through iteration using kernel approximations and a scaling matrix. The algorithm, named the Fast Committor Machine, applies the recent Recursive Feature Machine of Radhakrishnan et al. to the committor problem using randomized numerical linear algebra. Both projects outline practical data-driven methods for estimating quantities of interest in stochastic processes that are tunable with only a few hyperparameters. The success of these methods is demonstrated numerically against standard methods on the biomolecule alanine dipeptide.
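
A loose sketch of the kernel-plus-scaling-matrix iteration mentioned above, in the style of the Recursive Feature Machine (synthetic data, a generic regression target standing in for the committor, and illustrative hyperparameters; not the Fast Committor Machine implementation):

```python
# Recursive-feature-machine-style loop: kernel ridge regression with a
# Laplace kernel in a learned metric M, where M is re-estimated from the
# average outer product of predictor gradients. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.tanh(X[:, 0] - 2 * X[:, 1])        # target depends on a 2-D subspace

def mahalanobis_sq(A, B, M):
    # all-pairs squared distances (a - b)^T M (a - b), M symmetric
    AM = A @ M
    return (np.einsum("ij,ij->i", AM, A)[:, None]
            + np.einsum("ij,ij->i", B @ M, B)[None, :]
            - 2 * AM @ B.T)

M = np.eye(X.shape[1])
lam, bw = 1e-3, 2.0                       # ridge and bandwidth (assumed)
for it in range(5):
    D = np.sqrt(np.clip(mahalanobis_sq(X, X, M), 0, None))
    K = np.exp(-D / bw)                   # Laplace kernel in the M-metric
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    # gradient of the kernel predictor at each training point
    G = np.zeros_like(X)
    for i in range(len(X)):
        diff = X[i] - X
        d_i = np.maximum(D[i], 1e-8)      # avoid division by zero at i = i
        G[i] = -((K[i] * alpha / (bw * d_i))[:, None] * (diff @ M)).sum(axis=0)
    M = (G[:, :, None] * G[:, None, :]).mean(axis=0)
    M /= np.trace(M)                      # normalize overall scale
```
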
Item Open Access
Electrical impedance tomography with Calderón's method in two and three dimensions (Colorado State University. Libraries, 2020)
Shin, Kwancheol, author; Mueller, Jennifer L., advisor; Cheney, Margaret, committee member; Pinaud, Olivier, committee member; Mahmoud, Hussam, committee member
Electrical impedance tomography (EIT) is a non-invasive imaging technique in which electrical measurements on electrodes attached to the boundary of a subject are used to reconstruct the electrical properties of the subject. That is, voltage data arising from currents applied on the boundary are used to reconstruct the conductivity distribution in the interior. Calderón's method is a direct linearized reconstruction method for the inverse conductivity problem with the attributes that it can provide absolute images with no need for forward modeling, reconstructions can be computed in real time, and both conductivity and permittivity can be reconstructed. In this three-paper dissertation, first, an explicit relationship between Calderón's method and the D-bar method is provided, facilitating a "higher-order" Calderón's method in which a correction term, derived from the relationship to the D-bar method, is included. Furthermore, a method of including a spatial prior is provided. These advances are demonstrated on tank data collected with the ACE1 EIT system. On the other hand, it has been demonstrated that various EIT reconstruction algorithms are very sensitive to mismeasurement and incorrect modeling of the boundary shape. Calderón's method has been implemented with the correct boundary shape, but the exact locations of the electrodes are disregarded, as they are assumed to be spaced uniformly in angle. In the second body of work, Calderón's method is implemented with a new expansion technique that enables the use of the correct locations of the electrodes as well as the shape of the boundary, resulting in improved absolute images. We test the new algorithm with experimental data collected with the ACE1 EIT system. Finally, the first implementation of Calderón's method on a 3-D cylindrical domain with data collected on a portion of the boundary is provided. The effectiveness of the method in localizing inhomogeneities in the plane of the electrodes and in the z-direction is demonstrated on simulated and experimental data.

Item Open Access
General model-based decomposition framework for polarimetric SAR images (Colorado State University. Libraries, 2017)
Dauphin, Stephen, author; Cheney, Margaret, advisor; Kirby, Michael, committee member; Pinaud, Olivier, committee member; Morton, Jade, committee member
Polarimetric synthetic aperture radars emit a signal and measure the magnitude, phase, and polarization of the return. Polarimetric decompositions are used to extract physically meaningful attributes of the scatterers. Of these, model-based decompositions aim to model the measured data with canonical scatter types. Many advances have been made in this field of model-based decomposition, and this work is surveyed in the first portion of this dissertation. A general model-based decomposition framework (GMBDF) is established that can decompose polarimetric data with different scatter types and evaluate how well those scatter types model the data by comparing a residual term. The GMBDF solves simultaneously for all the scatter-type parameters within a given decomposition by minimizing the residual term. A decomposition with a lower residual term contains better scatter-type models for the given data. An example is worked through that compares two decompositions with different surface scatter-type models. As an application of the polarimetric decomposition analysis, a novel terrain classification algorithm for polSAR images is proposed. In the algorithm, the results of state-of-the-art polarimetric decompositions are processed for an image. Pixels are then selected to represent different terrain classes, and distributions of the parameters of these selected pixels are determined for each class. Each pixel in the image is given a score according to how well its parameters fit the parameter distributions of each class. Based on this score, the pixel is either assigned to a predefined terrain class or labeled unclassified.
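
A toy illustration of the model-based residual idea (placeholder real-valued scatter-type matrices; actual polarimetric models are complex Hermitian, and the GMBDF is more general than a linear fit):

```python
# Fit a measured coherency-like matrix as a nonnegative combination of
# canonical scatter-type models and report the residual. The matrices
# below are illustrative placeholders, not the GMBDF's models.
import numpy as np
from scipy.optimize import nnls

surface = np.diag([1.0, 0.1, 0.0])        # placeholder surface model
double = np.diag([0.1, 1.0, 0.0])         # placeholder double-bounce model
volume = np.eye(3) / 3.0                  # placeholder volume model

# Synthetic "measured" pixel: mostly surface plus volume, plus clutter
T_meas = 0.6 * surface + 0.3 * volume + 0.05 * np.eye(3)

A = np.column_stack([m.ravel() for m in (surface, double, volume)])
powers, residual = nnls(A, T_meas.ravel())
print("scatter-type powers:", powers, "residual:", residual)
```
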
Item Open Access
GPS equatorial ionospheric scintillation signals simulation, characterization, and estimation (Colorado State University. Libraries, 2019)
Xu, Dongyang, author; Morton, Yu, advisor; Rino, Charles, committee member; van Graas, Frank, committee member; Pinaud, Olivier, committee member
Strong equatorial ionospheric scintillation is characterized by simultaneous deep amplitude fading and fast phase fluctuations, which can severely degrade GNSS receiver performance and impact a variety of GNSS applications. This dissertation addresses equatorial ionospheric scintillation effects on GNSS signals in three aspects: simulation, characterization, and estimation. The first part of the dissertation presents a physics-based strong-scintillation simulator that requires only two scintillation indicators as input parameters, with validation results based on a large amount of real scintillation data. To improve the accuracy of carrier phase estimation, a semi-open-loop algorithm is developed in the second part of the dissertation. The performance of this algorithm is evaluated, using the developed simulator, against two other state-of-the-art algorithms, and shows improved performance in terms of reduced cycle-slip occurrences and estimation error. In the third part, scintillation signal characterization is conducted using a large amount of real strong-scintillation data from Ascension Island. Statistical summaries are obtained, including the temporal characteristics of, and correlation between, fast phase changes and deep fades, and the statistical relationship between data-bit decoding error occurrences and the intensity of amplitude scintillation.

Item Open Access
Heavy tail analysis for functional and internet anomaly data (Colorado State University. Libraries, 2021)
Kim, Mihyun, author; Kokoszka, Piotr, advisor; Cooley, Daniel, committee member; Meyer, Mary, committee member; Pinaud, Olivier, committee member
This dissertation is concerned with the asymptotic theory of statistical tools used in extreme value analysis of functional data and internet anomaly data. More specifically, we study four problems associated with analyzing the tail behavior of functional principal component scores in functional data and interarrival times of internet traffic anomalies, which are available only with a round-off error. The first problem we consider is the estimation of the tail index of scores in functional data. We employ the Hill estimator for the tail index estimation and derive conditions under which the Hill estimator computed from the sample scores is consistent for the tail index of the unobservable population scores. The second problem studies the dependence between extremal values of functional scores using the extremal dependence measure (EDM). After extending the EDM defined for positive bivariate observations to multivariate observations, we study conditions guaranteeing that a suitable estimator of the EDM based on these scores converges to the population EDM and is asymptotically normal. The third and fourth problems investigate the asymptotic and finite-sample behavior of the Hill estimator applied to heavy-tailed data contaminated by errors. For the third, we show that for time series models often used in practice, whose non-contaminated marginal distributions are regularly varying, the Hill estimator is consistent. For the fourth, we formulate conditions on the errors under which the Hill and Harmonic Moment estimators applied to i.i.d. data continue to be asymptotically normal. The results of large- and finite-sample investigations are applied to internet anomaly data.
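
For reference, a minimal implementation of the Hill estimator on synthetic Pareto data (illustrative only; the dissertation concerns its asymptotics for estimated scores and under contamination):

```python
# Hill estimator of the tail index from the k largest order statistics;
# k is a tuning choice. Synthetic Pareto sample for illustration.
import numpy as np

def hill_estimator(x, k):
    x = np.sort(np.asarray(x))            # ascending order statistics
    logs = np.log(x[-k:]) - np.log(x[-k - 1])
    gamma = logs.mean()                   # Hill estimate of 1/alpha
    return 1.0 / gamma                    # tail index alpha

rng = np.random.default_rng(1)
sample = rng.pareto(a=2.5, size=5000) + 1.0   # tail index 2.5
print(hill_estimator(sample, k=200))      # should be near 2.5
```
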
Item Open Access
Links between climate feedbacks and the large-scale circulation across idealized and complex climate models (Colorado State University. Libraries, 2023)
Davis, Luke L. B., author; Thompson, David W. J., advisor; Maloney, Eric, committee member; Randall, David, committee member; Pinaud, Olivier, committee member; Gerber, Edwin, committee member
The circulation response to anthropogenic forcing is typically considered in one of two distinct frameworks: One that uses radiative forcings and feedbacks to investigate the thermodynamics of the response, and another that uses circulation feedbacks and thermodynamic constraints to investigate the dynamics of the response. In this thesis, I aim to help bridge the gap between these two frameworks by exploring direct links between climate feedbacks and the atmospheric circulation across ensembles of experiments from idealized and complex general circulation models (GCMs). I first demonstrate that an existing, widely-used type of idealized GCM — the dynamical core model — has climate feedbacks that are explicitly prescribed and determined by a single parameter: The thermal relaxation timescale. The dynamical core model may thus help to fill gaps in the model hierarchies commonly used to study climate forcings and climate feedbacks. I then perform two experiments: One that explores the influence of prescribed feedbacks on the unperturbed, climatological circulation; and a second that explores their influence on the circulation response to a horizontally uniform, global warming-like forcing perturbation. The results indicate that more stabilizing climate feedbacks are associated with 1) a more vigorous climatological circulation with increased thermal diffusivity, and 2) a weaker poleward displacement of the circulation in response to the global warming-like forcing. Importantly, since the most commonly-used relaxation timescale field resembles the real-world clear-sky feedback field, the uniform forcing perturbations produce realistic warming patterns, with amplified warming in the tropical upper troposphere and polar lower troposphere. The warming pattern and circulation response disappear when the relaxation timescale field is instead spatially uniform, demonstrating the critical role of spatially-varying feedback processes in shaping the response to anthropogenic forcing. I next explore circulation-feedback relationships in more complex GCMs using results from the most recent Coupled Model Intercomparison Projects (CMIP5 and CMIP6). Here, I estimate climate feedbacks by regressing top-of-atmosphere radiation against surface temperature for both 1) an unperturbed pre-industrial control experiment and 2) a perturbed global warming experiment forced by an abrupt quadrupling of CO2 concentrations. I find that across both ensembles, the cloud component of the perturbed climate feedback is closely related to the cloud component of the unperturbed climate feedback. Critically, the relationship is much stronger in CMIP6 than CMIP5, contrasting with many previously proposed constraints on the perturbation response. The relationship also explains the slow part of the CO2 response better than the fast, transient response. In general, the strength of the relationship depends on the degree to which the spatial pattern of the response resembles ENSO-dominated internal variability, with "El Niño-like" East Pacific warming and related tropical cloud changes. This is consistent with fluctuation-dissipation theory: Regions with stronger deep ocean heat exchange and weaker net feedbacks must always dominate both 1) internal fluctuations in the global energy budget, and 2) the slow part of the response to forcing perturbations. The stronger CMIP6 inter-model relationships are due to both an amplification of this mechanism and higher inter-model correlations between tropical cloud changes and extratropical cloud changes. Finally, I present emergent constraints on the slow response using a recent observational estimate of the unperturbed cloud feedback. I conclude by discussing some implications of these results. I consider how the relaxation feedback framework might be further developed and reconciled with traditional climate feedbacks to provide future research opportunities with climate model hierarchies.
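
The feedback-estimation step described above amounts to a simple regression; schematically (synthetic anomalies standing in for GCM output):

```python
# Schematic feedback estimate: regress global-mean top-of-atmosphere
# radiation anomalies R against surface temperature anomalies T.
# Synthetic data; in practice R and T come from model output.
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=300)                  # temperature anomalies (K)
lam_true = -1.2                           # assumed feedback, W m^-2 K^-1
R = lam_true * T + rng.normal(scale=0.5, size=300)

lam_hat = np.polyfit(T, R, 1)[0]          # slope = feedback parameter
print(f"estimated feedback: {lam_hat:.2f} W m^-2 K^-1")
```
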
Item Open Access
Methods for extremes of functional data (Colorado State University. Libraries, 2018)
Xiong, Qian, author; Kokoszka, Piotr S., advisor; Cooley, Daniel, committee member; Pinaud, Olivier, committee member; Wang, Haonan, committee member
Motivated by the problem of extreme behavior of functional data, we develop statistical theory at the nexus of functional data analysis (FDA) and extreme value theory (EVT). A fundamental technique of functional data analysis is to replace infinite-dimensional curves with finite-dimensional representations in terms of functional principal components (FPCs). The coefficients of these projections, called the scores, encode the shapes of the curves. Therefore, the study of the extreme behavior of a functional time series can be transformed into a study of its functional principal component scores. We first derive two tests of significance of the slope function using functional principal components and their empirical counterparts (EFPCs). Applied to tropical storm data, these tests show a significant trend in the annual pattern of upper wind speed levels of hurricanes. We then establish sufficient conditions under which the asymptotic extreme behavior of the multivariate estimated scores is the same as that of the population scores. We clarify these issues, including the rate of convergence, for Gaussian functions and for more general functional time series whose projections are in the Gumbel domain of attraction. Finally, we derive the asymptotic distribution of the sample covariance operator and of the sample functional principal components for functions which are regularly varying and whose fourth moment does not exist. The new theory is applied to establish the consistency of the regression operator in a functional linear model with such errors.
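
A minimal sketch of computing functional principal component scores on a grid (synthetic curves and quadrature-based inner products; illustrative only):

```python
# Discretize curves on a grid, eigendecompose the sample covariance,
# and project centered curves onto the leading eigenfunctions.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 100)
n = 50
curves = (rng.normal(size=(n, 1)) * np.sin(2 * np.pi * t)
          + rng.normal(size=(n, 1)) * np.cos(2 * np.pi * t)
          + 0.1 * rng.normal(size=(n, len(t))))   # two modes plus noise

centered = curves - curves.mean(axis=0)
cov = centered.T @ centered / n           # discretized covariance operator
vals, vecs = np.linalg.eigh(cov)
vecs = vecs[:, ::-1]                      # leading eigenfunctions first
scores = centered @ vecs[:, :2] * (t[1] - t[0])   # quadrature inner products
print(scores[:3])                         # scores of the first three curves
```
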
Item Open Access
Modeling of optical waveguides with porous silica claddings and their use in LEAC sensors (Colorado State University. Libraries, 2014)
Obeidat, Yusra Mahmoud, author; Lear, Kevin L., advisor; Pasricha, Sudeep, committee member; Pinaud, Olivier, committee member
Integrated optical biosensors have many advantages, such as low cost, portability, and the ability to detect multiple analytes on a single waveguide, and they can be used in many important applications, including biosensing. Previous research addressed the design, modeling, and measurement of the local evanescent array coupled (LEAC) biosensor, with sensors made using conventional dielectrics such as SiO2 and SiNx. The large increase in the complexity of integrated circuits has increased the need to develop low-k dielectrics as new materials to cope with integration challenges and improve operating speed. Furthermore, optical interconnects are expected to replace electrical interconnects in ICs to meet future goals, which increases the need for simultaneous manufacturing of electronics and optics on the same chip using a CMOS process. The research conducted during my Master of Science studies addressed two goals. The first was to use models to calculate surface and volume scattering losses in optical waveguides, especially ones with porous silica claddings. The second was to use the simulation results to demonstrate the possibility of using porous silica in designing optical waveguides and LEAC sensors. By applying these models to porous silica optical waveguides described in previous publications, agreement between the published experimental results and the model results has been demonstrated; thus, these models can be used in the future to calculate scattering losses in optical waveguides, including ones with porous silica claddings. The main methods used to prepare porous silica and the models used to determine the effective index of porous silica are also discussed. A Matlab modesolver was used to simulate porous silica waveguides, and predictions for sensor sensitivity and waveguide loss as a function of waveguide dimension were made using the modesolver simulation results. The results demonstrate the ability to use porous silica in LEAC sensors in the future.

Item Open Access
Path planning for autonomous aerial vehicles using Monte Carlo tree search (Colorado State University. Libraries, 2024)
Vasutapituks, Apichart, author; Chong, Edwin K. P., advisor; Azimi-Sadjadi, Mahmood, committee member; Pinaud, Olivier, committee member; Pezeshki, Ali, committee member
Unmanned aerial vehicles (UAVs), or drones, are widely used in civilian and defense applications, such as search-and-rescue operations, monitoring and surveillance, and aerial photography. This dissertation focuses on autonomous UAVs for tracking mobile ground targets. Our approach builds on optimization-based artificial intelligence for path planning by calculating approximately optimal trajectories. This poses a number of challenges, including the need to search over large solution spaces in real time. To address these challenges, we adopt a technique involving a rapidly-exploring random tree (RRT) and Monte Carlo tree search (MCTS). The RRT technique grows in computational cost as the number of mobile targets and the complexity of the dynamics increase. Our MCTS approach executes a tree search based on random sampling to generate trajectories in real time. We develop a variant of MCTS for online path planning to track ground targets, together with an associated algorithm called P-UAV. Our algorithm is based on the framework of partially observable Monte Carlo planning, originally developed in the context of MCTS for Markov decision processes. The real-time approach exploits a parallel-computing strategy with a heuristic random-sampling process, and we explicitly incorporate threat evasion, obstacle collision avoidance, and resilience to wind. The approach embodies an exploration-exploitation tradeoff in seeking a near-optimal solution in spite of the huge search space. We provide simulation results to demonstrate the effectiveness of our path-planning method.
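
A generic UCT-style Monte Carlo tree search sketch on a toy one-dimensional tracking problem (hypothetical actions and rewards; not the P-UAV algorithm, which adds partial observability, parallelism, and vehicle constraints):

```python
# Generic UCT: expand untried actions, select by UCB1, evaluate leaves
# with random rollouts, backpropagate values, return most-visited action.
import math
import random

ACTIONS = [-1, 0, 1]                      # toy heading changes
TARGET, HORIZON = 7, 10

def rollout(state, depth):
    # random default policy; reward is closeness to target at the horizon
    for _ in range(depth):
        state += random.choice(ACTIONS)
    return -abs(state - TARGET)

class Node:
    def __init__(self):
        self.n, self.q = 0, 0.0
        self.children = {}                # action -> Node

def uct_search(root_state, iters=2000, c=1.4):
    root = Node()
    for _ in range(iters):
        node, state, path, depth = root, root_state, [], 0
        while depth < HORIZON:
            if len(node.children) < len(ACTIONS):   # expand untried action
                a = random.choice([a for a in ACTIONS if a not in node.children])
                node.children[a] = Node()
            else:                                   # UCB1 selection
                a = max(node.children,
                        key=lambda a: node.children[a].q / node.children[a].n
                        + c * math.sqrt(math.log(node.n + 1) / node.children[a].n))
            node = node.children[a]
            state += a
            path.append(node)
            depth += 1
            if node.n == 0:               # stop at a newly expanded leaf
                break
        value = rollout(state, HORIZON - depth)
        for nd in [root] + path:          # backpropagate visit counts/values
            nd.n += 1
            nd.q += value
    return max(root.children, key=lambda a: root.children[a].n)

print("first action:", uct_search(0))
```
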
Item Open Access
Recovery of organ boundaries in electrical impedance tomography images using a priori data, optimization, and deep learning (Colorado State University. Libraries, 2019)
Capps, Michael, author; Mueller, Jennifer, advisor; Cheney, Margaret, committee member; Pinaud, Olivier, committee member; Bartels, Randy, committee member
In this thesis we explore electrical impedance tomography (EIT) and new aspects of the solutions to the inverse conductivity problem. Specifically, we focus on new methods for obtaining additional information from direct reconstructions on 2-D domains using the D-bar method, based on work by Nachman in 1996 and by Mueller and Siltanen in 2000. We cover the history of EIT and review the relevant literature. The original work presented covers (1) an application of signal separation of cardiac and ventilation signals to the recovery of pulmonary measures and the detection of air trapping in children with cystic fibrosis, (2) recovery of the boundaries of internal structures in EIT data sets using optimization of a priori data in the D-bar method, and (3) recovery of the boundaries of internal structures in EIT data sets using deep neural networks applied to the scattering transform in the D-bar method. Results using both numerically simulated data and data collected on a tank with simulated organs made of agar are presented.

Item Open Access
Spaced-GNSS receiver techniques for ionospheric irregularity drift velocity and height estimation based on high-latitude GNSS scintillation (Colorado State University. Libraries, 2018)
Wang, Jun, author; Morton, Y. Jade, advisor; Rino, Charles, committee member; Luo, J. Rockey, committee member; Pinaud, Olivier, committee member
Spaced-GNSS-receiver measurements offer an inexpensive approach for remote sensing of the ionospheric irregularity drift velocity during ionospheric scintillation. Conventional approaches targeting equatorial amplitude scintillation are less applicable in high-latitude regions, where phase scintillation is more prominent. This dissertation demonstrates spaced-receiver techniques that use multi-GNSS carrier phase measurements to estimate irregularity drift velocity and effective irregularity height at high latitudes during scintillation. A time-domain method and a time-frequency-domain method are implemented to extract time-lag information between receiver pairs observing the same irregularity structure. Based on the front velocity model and the anisotropy model, a hybrid correlation model is developed to account for the topology of the irregularity. From the time-lag information, the hybrid correlation model, and the known satellite-receiver geometry, the irregularity drift velocity can be obtained. In addition, an inversion technique for estimating the effective height of the irregularity is developed based on the anisotropy model. These techniques are applied to data collected by two GNSS receiver arrays at Gakona and the Poker Flat Research Range in Alaska. The GNSS-estimated drift velocities at Poker Flat are in general agreement with measurements from the co-located incoherent scatter radar and all-sky imager. The effective height estimates also compare favorably against the incoherent scatter radar measurements.

Item Open Access
Weighted ensemble: practical variance reduction techniques (Colorado State University. Libraries, 2022)
Johnson, Mats S., author; Aristoff, David, advisor; Cheney, Margaret, committee member; Krapf, Diego, committee member; Pinaud, Olivier, committee member
Computational biology and chemistry are replete with important constants that researchers wish to determine. The mean first-passage time (MFPT) is one such quantity of interest, pursued in molecular dynamics simulations of protein conformational changes, enzyme reaction rates, and more. Often, the simulation of these processes is hindered by the prohibitively small probability of observing the events of interest. For these rare events, direct estimation by Monte Carlo techniques can be burdened by high variance. We analyzed an importance-sampling splitting-and-killing algorithm called weighted ensemble to address these drawbacks. We used weighted ensemble in the context of a stochastic process governed by a Markov chain (X_t)_{t≥0} with steady-state distribution μ to estimate the MFPT. Weighted ensemble works by partitioning the state space into bins and replicating trajectories in an advantageous and unbiased manner. By introducing a recycling boundary condition, we improved the convergence of the problem to steady state and made use of the Hill relation to estimate the MFPT. This change allows relevant conclusions to be drawn from simulations on much shorter time scales than direct estimation of the MFPT requires. After defining the weighted ensemble algorithm, we decomposed the variance of the weighted ensemble estimator in a way that admits simple optimization problems. We also defined the relevant coordinate for splitting trajectories in the weighted ensemble method, the flux-discrepancy function, and its associated variance function. When combined with the variance formulas, the flux-discrepancy function was used to guide parameter choices for binning and replication strategies in the weighted ensemble algorithm. Finally, we discuss practical implementations of solutions to the aforementioned optimization problems and demonstrate their effectiveness in the context of a toy problem. We found that the techniques we present offer a significant variance reduction over a naive implementation of weighted ensemble that is commonly used in practice and over direct simulation by naive Monte Carlo. The optimizations we present correspond to a reduced computational cost for implementing the weighted ensemble algorithm. We further found that our results are applicable even in the case of limited resources, which makes their application even more appealing.
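
A minimal weighted-ensemble loop on a one-dimensional double-well toy problem, using a simplified resampling form of splitting and merging within bins, a recycling boundary, and the Hill relation MFPT = 1/flux (illustrative parameters throughout; not the thesis implementation):

```python
# Weighted-ensemble sketch: weighted walkers evolve under overdamped
# Langevin dynamics; each step they are resampled within bins to a fixed
# count, weight reaching the target set B is recycled to the source, and
# the steady-state flux into B gives the MFPT via the Hill relation.
import numpy as np

rng = np.random.default_rng(4)
dt, beta = 1e-3, 1.0
force = lambda x: -4 * x * (x**2 - 1)     # -grad of potential (x^2 - 1)^2

def we_step(x, w, nbins=12, m=10):
    # simplified resampling variant of WE splitting/merging, per bin
    edges = np.linspace(-1.5, 1.5, nbins + 1)
    idx = np.digitize(x, edges)
    new_x, new_w = [], []
    for b in np.unique(idx):
        sel = np.flatnonzero(idx == b)
        tot = w[sel].sum()
        pick = rng.choice(sel, size=m, p=w[sel] / tot)
        new_x.append(x[pick])
        new_w.append(np.full(m, tot / m))  # bin weight conserved
    return np.concatenate(new_x), np.concatenate(new_w)

n0 = 100
x = np.full(n0, 1.0)                      # start in the right well
w = np.full(n0, 1.0 / n0)
flux = []
for it in range(2000):
    x = x + force(x) * dt + np.sqrt(2 * dt / beta) * rng.normal(size=len(x))
    hit = x < -0.9                        # entered target set B
    flux.append(w[hit].sum())
    x[hit] = 1.0                          # recycling boundary condition
    x, w = we_step(x, w)

mfpt = dt / np.mean(flux[500:])           # Hill relation, after burn-in
print(f"estimated MFPT: {mfpt:.1f}")
```
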