Theses and Dissertations
Recent Submissions
Item Open Access
Full-wave and asymptotic computational electromagnetics methods: on their use and implementation in received signal strength, radar-cross-section, and uncertainty quantification predictions (Colorado State University. Libraries, 2024) Kasdorf, Stephen, author; Notaroš, Branislav M., advisor; Ilić, Milan, committee member; Wilson, Jesse, committee member; Venayagamoorthy, Karan, committee member

We propose and evaluate several improvements to the accuracy of the shooting and bouncing rays (SBR) method for ray-tracing (RT) electromagnetic modeling. A per-ray cone angle calculation is introduced, with the maximum separation angle determined for each individual ray based on local neighbors, allowing the smallest theoretical error in SBR. This enables adaptive ray spawning and provides a unique analysis of the effect of ray cone sizes on accuracy. For conventional uniform angular distribution, we derive an optimal cone angle to further enhance accuracy. Both approaches are integrated with icosahedral ray spawning geometry and a double-counted ray removal technique, which avoids complex ray path searches. The results demonstrate that the advanced SBR method can perform wireless propagation modeling of tunnel environments with accuracy comparable to the image theory RT method, but with much greater efficiency. To further advance the efficiency of the SBR method, we propose a unified parallelization framework leveraging NVIDIA OptiX Prime programming interfaces on graphics processing units (GPUs). The framework achieves comprehensive parallelization of all components of the SBR algorithm, including traditionally sequential tasks like electric field computation and postprocessing. Through optimization of memory usage and GPU resources, the new SBR method achieves upwards of 99% parallelism under Amdahl's scaling law.
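As a rough illustration of what a 99% parallel fraction implies under Amdahl's law (the worker counts below are assumptions for this sketch, not figures from the dissertation):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Amdahl's law: speedup of a workload where only part is parallelizable."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# With 99% of the pipeline parallelized, speedup saturates near 100x
# no matter how many GPU threads are available.
print(round(amdahl_speedup(0.99, 1024), 1))   # large thread count: ~91.2
print(round(1.0 / (1.0 - 0.99), 1))           # asymptotic limit: 100.0
```

This is why the framework's parallelization of the "traditionally sequential" stages matters: the residual serial fraction, not the GPU count, bounds the achievable speedup.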
This innovative parallelization yields dramatic speedups without sacrificing the previously enhanced accuracy of the SBR method, demonstrating an unparalleled level of computational efficiency for large-scale electromagnetic propagation simulations. Finally, we implement and validate several advanced Kriging methodologies for uncertainty quantification (UQ) in computational electromagnetics (CEM). The universal Kriging, Taylor Kriging, and gradient-enhanced Kriging methods are applied to reconstruct probability density functions, offering efficient alternatives to Monte Carlo simulations. We further propose the novel gradient-enhanced Taylor Kriging (GETK) method, which combines the advantages of gradient information and basis functions, yielding superior surrogate function accuracy and faster convergence. Numerical results using higher-order finite-element scattering modeling show that GETK dramatically outperforms other Kriging and non-Kriging methods in UQ problems, accurately predicting the impact of stochastic input parameters, such as material uncertainties, on quantities of interest like radar cross-section.

Item Embargo
IMSIS: an instrumented microphysiological system with integrated sensors for monitoring cellular metabolic activities (Colorado State University. Libraries, 2024) Cheng, Ming-Hao, author; Chen, Thomas W., advisor; Lear, Kevin, committee member; Wilson, Jesse W., committee member; Carnevale, Elaine, committee member; Chicco, Adam J., committee member

Well plates are widely used in biological experiments, particularly in pharmaceutical sciences and cell biology. Their popularity stems from their versatility in supporting a variety of fluorescent markers for high-throughput monitoring of cellular activities.
However, using fluorescent markers in traditional well plates has its own challenges: they can be toxic to cells and thus may perturb their biological functions, and it is difficult to monitor multiple analytes concurrently and in real time inside each well. In this dissertation, an Instrumented Microphysiological System with Integrated Sensors (IMSIS) platform is presented. The IMSIS platform is supported by integrated bioelectronic circuits and a graphical user interface for easy user configuration and monitoring. The IMSIS platform currently incorporates O2, H2O2, and pH sensors inside each well, allowing up to six wells to perform concurrent non-destructive and label-free measurements in real time. The system has integrated microfluidics to maintain the microphysiological environment within each well. The miniaturized design ensures portability, suitable for small offices and field applications. The IMSIS platform is equipped with a 14-bit ADC and read-channel bioelectronics with signal-to-noise ratios (SNRs) of 79 dB, 112 dB, and 48 dB for measuring oxygen consumption rate (OCR), hydrogen peroxide production rate (HPR), and extracellular acidification rate (ECAR), respectively. Furthermore, the scalable design of the architecture allows easy expansion to accommodate higher throughput in the future. A graphical user interface was developed to provide a dashboard through which users operate the system. The versatile platform supports electrochemical sensing techniques such as amperometry, chronoamperometry, and potentiometry, with a reference electrode voltage range of ±1 V. The IMSIS platform has been used to monitor the real-time metabolic activities of various biological samples, including bovine, equine, and human oocytes, bovine and equine embryos, as well as isolated mouse cardiac mitochondria.
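For context on the reported read-channel figures, the textbook relation between ADC resolution and quantization-limited SNR (SNR ≈ 6.02·N + 1.76 dB) can be sketched as follows; this relation is standard, not taken from the dissertation:

```python
def ideal_adc_snr_db(bits: int) -> float:
    """Ideal quantization-limited SNR of an N-bit ADC (full-scale sine input)."""
    return 6.02 * bits + 1.76

def effective_bits(snr_db: float) -> float:
    """Effective number of bits implied by a measured SNR."""
    return (snr_db - 1.76) / 6.02

print(round(ideal_adc_snr_db(14), 2))      # ~86.04 dB for a 14-bit converter
for snr in (79, 112, 48):                  # reported OCR / HPR / ECAR channel SNRs
    print(round(effective_bits(snr), 1))
```

That the 112 dB channel exceeds the ~86 dB single-conversion limit of a 14-bit ADC is plausible for a slow electrochemical read channel, where filtering or oversampling raises the effective resolution.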
The IMSIS platform has successfully shown its capability to simultaneously measure OCR, ECAR, and HPR both in the sample's basal state and in response to external stimuli, such as oligomycin. The design of the IMSIS platform and the experimental results underscore its significant potential for diverse clinical and research applications. These include embryo quality assessment for assisted reproductive technology (ART), investigation of the effects of obesity-induced mitochondrial dysfunction, and analysis of cancer tumors and their metabolic response to therapeutics.

Item Open Access
Improvements to the tracking process (Colorado State University. Libraries, 2024) Lewis, Codie T., author; Cheney, Margaret, advisor; Chandrasekar, Venkatachalam, advisor; Crouse, David, committee member; Kirby, Michael, committee member

Accurate target tracking is a fundamental requirement of modern automated systems. An accurate tracker must correctly associate new observations with existing tracks and update those tracks to reflect the new information, predicting assignments and measurement distributions that closely match the ground truth. This work will show that aspects of the global nearest pattern (GNP) algorithm and the interacting multiple model (IMM) filter require amendments and renewed investigation. To aid the framing of the solutions in the context of tracking, some general background will be presented first; more specific background will be given prior to the corresponding contributions. Modern sensor networks require the alignment of track pictures from multiple sensors (sometimes called sensor registration). This issue was described in the 1990s and termed the global nearest pattern problem in the early 2000s. The following work presents a correction and extension of the solution to the global nearest pattern problem with a heuristic error estimation algorithm. Its use for sensor calibration is demonstrated.
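The measurement-to-track association step mentioned above is often posed as a minimum-cost assignment. The following toy (with made-up costs, solved by brute force; it is a plain global-nearest-neighbor sketch, not the dissertation's GNP algorithm) illustrates the idea:

```python
from itertools import permutations

# Toy squared-distance costs between 3 predicted tracks (rows)
# and 3 new measurements (columns); the values are made up.
cost = [[0.2, 9.0, 7.5],
        [8.0, 0.5, 6.0],
        [7.0, 6.5, 0.1]]

def best_assignment(cost):
    """Global nearest neighbor: minimize total cost over all pairings."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(enumerate(best))

print(best_assignment(cost))  # [(0, 0), (1, 1), (2, 2)]
```

Production trackers replace the factorial search with the Hungarian algorithm or gating, but the objective being minimized is the same.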
Once measurements have been associated with tracks, there remain several choices that define the tracking algorithm, one being the filtering algorithm that updates the track state. One common solution for filtering is the interacting multiple model filter, originally developed in the 1980s, which is essentially a bank of Kalman filters that are weighted and mixed based on a predefined Markov chain. The validity of the assumptions on that Markov chain will be discussed, and a recommendation for replacing those assumptions with neural networks will be proposed and assessed. Finally, following association of two tracks for a single target, it is necessary to combine their information while respecting the lack of knowledge about correlations between the tracks. Covariance intersection was developed in the 1990s and 2000s for track-to-track fusion when tracks are assumed Gaussian. A generalization of covariance intersection, Chernoff fusion, was developed in the 2000s for handling general track states. A connection made in the literature that allows for direct analysis of the error of Chernoff fusion is used to evaluate the effectiveness of Fibonacci lattices for the quasi-Monte Carlo integration required by Chernoff fusion.

Item Open Access
Design and optimization of efficient, fault-tolerant and secure 2.5D chiplet systems (Colorado State University. Libraries, 2024) Taheri, Ebad, author; Nikdast, Mahdi, advisor; Pasricha, Sudeep, advisor; Malaiya, Yashwant K., committee member; Jayasumana, Anura P., committee member

In response to the burgeoning demand for high-performance computing systems, this Ph.D. dissertation investigates the pivotal challenges surrounding Networks-on-Chip (NoCs) within the framework of 2.5D and 3D integration technologies, with a primary objective of enhancing the efficiency, fault tolerance, and security of forthcoming computing system architectures.
The inherent limitations in bandwidth and reliability at the boundary of chiplets in 2.5D chiplet systems create significant challenges in traffic management, latency, and energy efficiency. Furthermore, the interconnected global network on an interposer, linking multiple chiplets, necessitates high-bandwidth, low-latency communication to accommodate the substantial traffic generated by numerous cores across diverse chiplets. This Ph.D. dissertation emphasizes various design aspects of NoCs, such as latency, energy efficiency, fault tolerance, and security. It explores the design of 3D NoCs leveraging Through-Silicon Vias (TSVs) for vertical communication. To address reliability concerns and fabrication costs associated with high TSV density, a Partially Connected 3D NoC (PC-3DNoC) is proposed. An adaptive congestion-aware TSV link selection algorithm is introduced to manage traffic load and optimize communication, resulting in reduced latency and improved energy efficiency. For 2.5D chiplet systems, a novel deadlock-free and fault-tolerant routing algorithm is presented. The fault-tolerant algorithm enhances redundancy in vertical link selection and offers improved network reachability with reduced latency compared to existing solutions, even in the presence of faults. Furthermore, to address the energy consumption concerns of silicon-photonic-based 2.5D networks, a reconfigurable, power-efficient, and congestion-aware silicon-photonic-based 2.5D interposer network is proposed. The proposed photonic interposer utilizes phase change materials (PCMs) for dynamic reconfiguration and power gating of the photonic network, leading to lower latency and improved energy efficiency. Additionally, the research investigates the integration of optical computation and communication into 2.5D chiplet platforms for domain-specific machine learning (ML) processing.
This approach aims to overcome limitations in computation density and communication speeds faced by traditional accelerators, paving the way for sustainable and scalable ML hardware. Furthermore, this dissertation proposes a 2.5D chiplet-based architecture utilizing a silicon-photonic-based interposer, which tackles the limitations of conventional bus-based communication by employing a novel switch-based network, achieving significant energy efficiency improvements for high-bandwidth, low-latency data movement in machine learning accelerators. The switch-based network employs our proposed optical switch based on Mach--Zehnder Interferometer (MZI) devices with a dividing state to facilitate broadcast and optimize communication for ML workloads. Finally, the dissertation explores security considerations in 2.5D chiplet systems with diverse, potentially untrusted chiplets. To address this, a secure routing framework for the Network-on-Interposer (NoI) is presented. The proposed secure framework protects the system against distributed denial-of-service (DDoS) attacks by concealing predictable routing paths. It leverages multi-objective optimization to balance efficiency and reliability for the NoI. The proposed contributions in this dissertation help advance the field of chip-scale interconnection networks by proposing novel techniques for improved performance, reliability, and power efficiency in 3D and 2.5D NoC architectures. These advancements hold promise for the design of future high-performance computing systems, particularly in the areas of machine learning and other computationally intensive applications.

Item Open Access
Design exploration and optimization of silicon photonic integrated circuits under fabrication-process variations (Colorado State University.
Libraries, 2024) Mirza, Asif Anwar Baig, author; Nikdast, Mahdi, advisor; Pasricha, Sudeep, advisor; Wilson, Jesse, committee member; Brewer, Samuel, committee member

Silicon photonic integrated circuits (PICs) have become a key solution for handling the growing data-transmission demands of emerging applications, consuming less power and dissipating less heat than electronic circuits while offering far higher data bandwidth. With Moore's Law slowing down and the end of Dennard scaling, PICs offer a logical step to improve data movement and processing performance in future computing systems. On PICs, light is processed and routed by means of optical waveguides. Silicon has a uniquely high refractive index contrast in the silicon-on-insulator (SOI) platform, which allows for tight confinement of light in nanometer-scale waveguide cores and bends with radii of only a few microns. PICs comprise a diverse set of elements, such as waveguide splitters, combiners, crossings, and couplers, which help with distribution, routing, and computation of optical signals. Optical signals are converted to electrical signals with the help of photodiodes, which in silicon photonics are implemented using germanium. To enable PICs for wavelength-division multiplexing (WDM), efficient wavelength filters consisting of optical delay lines or resonators are needed. Optical delay lines are usually built using Mach-Zehnder Interferometers (MZIs), each consisting of a splitter, two waveguides with a given group delay, and a combiner. Devices such as microring resonators (MRRs) can also be used as wavelength filters: an MRR is resonant when its circumference equals a whole multiple of the guided wavelength. Other components, such as grating couplers, help couple light into and out of a PIC. PICs can be fabricated on the infrastructure developed for complementary metal-oxide-semiconductor (CMOS) electronics.
This technology now enables deep submicron features with unprecedented accuracy in large volumes, along with close integration of photonic and electronic circuits. The use of silicon as a base material makes reuse of these manufacturing tools possible, but photonics imposes different demands on the processes. Although silicon photonics offers data transmission and computation at light speed with high bandwidth and low power consumption, the fundamental building blocks in PICs (e.g., optical waveguides) are extremely sensitive to nanometer-scale fabrication-process variations (FPVs) caused by slight randomness in optical lithography processes. Active compensation by means of electronic circuits (a.k.a. tuning) is necessary to counteract FPVs. Tunable microheaters, which alter the material properties of silicon, can be used for active compensation to improve a PIC's performance under FPVs. However, the total power consumed by tuning in a working PIC can be drastically high. For example, a variation as small as 1 nm in an MRR can shift the optical frequency response of the device by 2 nm, leading to approximately a 25% increase in the tuning power required to compensate for the variations of a single MRR. Additionally, a system can have thousands of such MRRs, which quickly compounds the total power consumption of the system. To address FPVs, reliability must be considered not just at the system level but down to the device level, designing reliable, FPV-aware devices that enable FPV-resilient PICs and photonic systems. Designing more reliable and FPV-tolerant photonic devices should not only reduce the total power consumption but also yield more reliable circuits with fault-free operational behavior for data transmission and computation in future computing systems. This PhD thesis covers the impact of process variations on photonic devices, primarily MRRs. We take a bottom-up approach to improving the reliability of an MRR against FPVs.
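The sensitivity quoted above can be made concrete with the microring resonance condition m·λ = n_eff·L. In this sketch the ring radius, effective index, and index-versus-width slope are assumed values (not taken from the thesis), chosen so that a 1 nm width error reproduces a shift of roughly 2 nm:

```python
import math

# Resonance condition of a microring: m * lam = n_eff * circumference.
RADIUS_UM = 5.0          # assumed ring radius (um)
N_EFF = 2.4              # assumed effective index near 1550 nm
DNEFF_PER_NM = 3.1e-3    # assumed d(n_eff)/d(width), chosen to match ~2 nm/nm

L = 2 * math.pi * RADIUS_UM                   # circumference, um
m = round(N_EFF * L / 1.55)                   # mode order nearest 1550 nm
lam0 = N_EFF * L / m                          # nominal resonance, um

def shifted_resonance(width_error_nm: float) -> float:
    """Resonant wavelength (um) after a fabrication width error."""
    return (N_EFF + DNEFF_PER_NM * width_error_nm) * L / m

shift_nm = (shifted_resonance(1.0) - lam0) * 1e3
print(round(shift_nm, 2))  # ~2 nm resonance shift from a 1 nm width error
```

Because the shift scales with the index slope, desensitizing n_eff to geometry (the goal of the FPV-aware designs discussed here) directly reduces the thermal tuning range a microheater must cover.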
We propose improved and optimized MRR designs that can be used in any PIC to reduce the overall FPV-induced shift in the device's resonant wavelength, in turn reducing the total power consumption required to tune the device. We confirmed our findings by fabricating such MRRs and comparing the improved and optimized designs against conventional MRRs. Furthermore, we study the impact these improved MRRs have in photonic artificial intelligence (AI) accelerators and how they can further improve network accuracy and overall power consumption. Finally, we compile our work into a device exploration tool that allows photonic designers to set design parameters of an MRR and study its behavior under different FPV profiles. With this tool we aim to give designers the ability to determine desired MRR designs based on design and performance requirements and the budget constraints set on a photonic system.

Item Open Access
Engineering a silicon-photonic bimodal biosensor (Colorado State University. Libraries, 2024) Mohammad, Ahmed, author; Nikdast, Mahdi, advisor; Lear, Kevin, advisor; Kipper, Matthew, committee member

Biosensors are powerful analytical devices that integrate biological sensing elements with physicochemical transducers to detect and quantify specific analytes, offering wide-ranging applications in fields such as medical diagnostics, environmental monitoring, food safety, and drug discovery. Bimodal waveguide (BiMW) biosensors, a class of interferometric optical biosensors, have proven to be among the best optical biosensors owing to their high sensitivity, real-time detection, and compact design. During their early development, in the early 2010s, the height of the bimodal waveguide was increased to induce interference between the fundamental and first-order modes. Later, in the late 2010s, changes in the width of the bimodal waveguide were introduced to induce this interference.
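The two-mode interference just described accumulates a phase difference Δφ = 2π·ΔN_eff·L/λ over the sensing length. In the sketch below, the wavelength and the assumed differential index response are illustrative values only, not parameters from the thesis:

```python
import math

def bimodal_phase_shift(d_neff: float, length_cm: float, lam_nm: float) -> float:
    """Phase difference (rad) accumulated between the fundamental and
    first-order modes over the sensing length, for a change d_neff in
    their effective-index difference."""
    length_nm = length_cm * 1.0e7                 # 1 cm = 1e7 nm
    return 2 * math.pi * d_neff * length_nm / lam_nm

# Assumed values: 1550 nm operation and a differential response of
# 3e-3 RIU change in mode-index difference per RIU of cover index.
sensitivity = bimodal_phase_shift(3e-3, 1.0, 1550.0)  # rad per (RIU * cm)
print(round(sensitivity, 1))
```

With these assumed numbers the figure lands on the order of 100 rad/(RIU cm), the scale at which this class of sensor is typically reported.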
Our novel design builds upon these advancements, focusing on optimizing several parameters, mainly the width of the bimodal biosensor, to enhance performance and sensitivity. Many designs were simulated to achieve high fringe visibility and to determine whether a reduction at the transmission monitor was due to reduced input power or to a change in the effective index in the sensing region. We then arrived at a design with one input, to maximize fringe visibility, and two outputs, to detect source-power fluctuations. Multiple parameter changes, such as the width and the offset of the input waveguide, were investigated, and changes in the width of the bimodal waveguide were also included in this experiment. Finally, we varied the gap between the two output bends. All these parameters were varied to achieve higher fringe visibility and thus better sensitivity. Moreover, we discovered that this design requires the sample to be placed on top of the bimodal waveguide, rather than on the sides. We concluded that the best design achieves a sensitivity of 120 rad/(RIU cm).

Item Open Access
Deep learning for short-term prediction of wildfire using geostationary satellite observations (Colorado State University. Libraries, 2024) Saqer, Yousef, author; Chen, Haonan, advisor; Azimi-Sadjadi, Mahmood R., committee member; Wei, Yu, committee member

The aim of this thesis is to utilize Geostationary Operational Environmental Satellite (GOES) data for predictions of the intensity and potential path of wildfires: GOES imagery is used to identify wildfires, and data from those events are extracted to train a deep learning model. Three fires were selected for training the deep learning model: the Sequoia, Calwood, and Maui fires. The GOES data for these fires were obtained from band 7, which operates in the shortwave window at a 3.9 μm wavelength; band 7 captures hotspots, which is beneficial for wildfire prediction.
The radiance data from band 7 are pulled from Amazon Web Services (AWS) into a dataset of 2513 samples. The data are stacked to form time series of approximately two hours and converted into a compressed h5 file. The pipeline distributes the dataset by taking twenty-five minutes of input data and feeding four different models to predict seventy-five minutes, one hundred minutes, and one hundred twenty-five minutes of data. The data are then fed into a deep learning model known as a Self-Attention Gated Recurrent Unit (SaGRU). The SaGRU is tested four times: once for predicting seventy-five minutes, once for predicting one hundred minutes, and twice for one hundred twenty-five minutes. The models were then compared against each other on Mean Squared Error (MSE) and Mean Absolute Error (MAE), along with the Normalized Mean Squared Error (NMSE) and the Normalized Mean Absolute Error (NMAE). Each metric was taken over multiple thresholds, comparing performance when hotspots are present and when hotspots are absent. The results showed that, regardless of sequence length, early predictions suffered minimal degradation, but significant loss appeared in the later predicted frames as the predicted sequence lengthened.

Item Embargo
Analysis of LEAC biosensor for scalable manufacturing using BPM and FDTD simulation methods (Colorado State University. Libraries, 2024) Holmes, Cameron Dane, author; Lear, Kevin L., advisor; Nikdast, Mahdi, committee member; Kipper, Matt, committee member

The increasing demand for rapid, scalable, and accurate diagnostic tools has driven the development of optical biosensing technologies. LEAC (Local Evanescent Array-Coupled) biosensors, which leverage the evanescent field generated by optical waveguides, are particularly well-suited for applications in biomedical diagnostics, environmental monitoring, and point-of-care testing.
LEAC biosensors have previously been fabricated in incomplete and unoptimized near-commercial CMOS processes and fully custom processes in a university cleanroom but have not been implemented in suitable high-volume processes such as commercial silicon photonics. A primary motivation for the research presented in this thesis is to evaluate the ability to fabricate LEAC biosensors operating at 1550 nm wavelengths in the commercial AIM Photonics' active silicon photonics process. This thesis presents a comprehensive tolerance analysis of LEAC sensors for both bulk sample layers (400 nm thick) and protein monolayers (10 nm thick) in AIM's process, focusing on the impact of variations in key design parameters—specifically waveguide core thickness, cladding layers, and photodetector placement—on sensor sensitivity. Beam Propagation Method (BPM) and Finite-Difference Time-Domain (FDTD) simulation techniques are employed to assess how these tolerances affect optical field propagation, power dissipation, and flux into the photodetector, serving as proxies for sensor performance. Additionally, the study examines crosstalk between multiple sensing regions, evaluating how refractive index variations in one region influence adjacent regions—an important consideration for multi-region sensors. Results show that sensor sensitivity increases with cladding thickness and decreases with waveguide core thickness. A 25 nm manufacturing error in core thickness resulted in less than a 10% sensitivity shift, and a 300 nm cladding thickness error had a similarly small effect. Resonant absorption between the core and photodetector was observed across both bulk and monolayer samples. Sensitivity depends heavily on proximity to resonance; a 10% error in photodetector thickness at resonance caused a 600% change in sensitivity, while off-resonance, the same error had minimal impact. Coupled Mode Theory (CMT) explained these energy transfers and power fluctuations. 
ANOVA analysis of full-device FDTD simulations quantified forward crosstalk due to modulated absorption from sample regions closer to the optical source (upstream). Forward crosstalk was found to be negligible for protein monolayer samples but could be significant in bulk samples. However, even in bulk samples, forward crosstalk was largely mitigated using photocurrent ratios with a reference region. A crosstalk ratio was used as a metric to determine the influence of each refractive index (n1, n3) on the photocurrent ratio. In the forward crosstalk direction, the use of photocurrent ratios decreased the magnitude of the forward crosstalk ratio; however, the use of photocurrent ratios inherently introduces dependence on downstream indices (reverse crosstalk). Reverse crosstalk, caused by reflections at the dielectric boundary between sensing regions, was found to be negligible using photocurrent ratios with bulk analytes; with monolayers, however, the use of photocurrent ratios introduced a slight dependence on the downstream region, indicating minor backward crosstalk. This can be mitigated by using raw current values rather than current ratios: raw currents eliminate backward crosstalk in region 1, while photocurrent ratios effectively eliminate forward crosstalk in region 3.

Item Open Access
High-rate GNSS satellite clock estimation: implications for radio occultation bending angle precision (Colorado State University. Libraries, 2024) Ko, Yao-Chun, author; Chen, Haonan, advisor; Yao, Jian, advisor; Chiu, Christine, committee member

The Global Navigation Satellite System (GNSS) radio occultation (RO) technique plays a vital role in collecting data for meteorological and space weather prediction. It is exemplified by the COSMIC-2 low-Earth-orbit (LEO) satellite constellation, which collects GNSS signals from elevation angles of 90° down to below the horizon.
GNSS observations above a 5° elevation angle are used for precise orbit determination of the satellites, while those below 5° are used for RO processing. A key part of RO processing is estimating the bending angle due to atmospheric refraction, which requires accurate information on the positions and clock offsets of both the transmitter (i.e., the GNSS satellite) and the receiver (i.e., the COSMIC-2 satellite). Previous research at the University Corporation for Atmospheric Research (UCAR) [1] indicates a notable reduction in the intrinsic uncertainty of GLONASS radio occultation when employing higher-rate GNSS satellite clock products (e.g., moving from a 30-second to a 2-second sampling interval). However, that work analyzed only one day of data. To analyze multiple days of data, I have developed a software program that automatically generates high-rate GNSS clock products using a GNSS toolkit called GINAN [2]. This program is also important to UCAR's future RO postprocessing and near-real-time processing. Specifically, it first downloads, merges, and decimates 1-second GNSS-receiver data from 50 worldwide ground stations, and then runs the GINAN software to generate clock products. I have validated the clock products generated by the program by comparing them to International GNSS Service (IGS) analysis centers' clock products – the standard deviation of the time difference between our clock products and those published by the Center for Orbit Determination in Europe is as small as ~0.1 nanoseconds. Using one week of 2-sec clock products generated by the program, I have run the standard RO processing and found that the bending-angle uncertainty of GLONASS RO is reduced by ~34% compared to using the existing 30-sec clock products. Admittedly, there is no obvious improvement for GPS RO because the GPS satellite clocks are stable over short intervals of <= 30 seconds.
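Why denser clock products help can be illustrated with a purely synthetic clock: sample a simulated random-walk clock every 30 s and every 2 s, interpolate back to 1 Hz, and compare the RMS interpolation error. All numbers here are synthetic assumptions, not GNSS data:

```python
import random
import statistics

random.seed(1)
# Simulate a satellite clock offset as a random walk (white frequency
# noise), one value per second for an hour -- purely synthetic.
truth = [0.0]
for _ in range(3600):
    truth.append(truth[-1] + random.gauss(0.0, 0.01))  # ns per step

def interp_error(truth, step):
    """RMS error when the clock is sampled every `step` seconds and
    linearly interpolated back to 1 Hz."""
    errs = []
    for t in range(0, len(truth) - step, step):
        a, b = truth[t], truth[t + step]
        for k in range(step):
            est = a + (b - a) * k / step
            errs.append((est - truth[t + k]) ** 2)
    return statistics.mean(errs) ** 0.5

err_30 = interp_error(truth, 30)
err_2 = interp_error(truth, 2)
print(err_30 > err_2)  # True: denser clock products track the clock better
```

The GPS result above fits the same picture: a clock that is stable over 30 s intervals interpolates well even from sparse products, so densifying them gains little.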
By pushing down the noise of the RO technique, we can possibly observe the atmosphere at an unprecedented precision, which could benefit research on atmosphere modelling, the operation of weather monitoring and forecasting, and even the study of space weather.

Item Embargo
Fusion of observations from C-band polarimetric radar and S-band profiler radar during a convective storm (Colorado State University. Libraries, 2024) Adubi, Tunde Habibullah, author; Chandrasekar, Venkatachalam, advisor; Cheney, Margaret, committee member; Popat, Ketul, committee member

This study discusses a procedure to measure and correct attenuation of radar signals caused by the presence of partially melted ice hydrometeors (graupel and hail) in convective storms, by utilizing simultaneous observations from a C-band dual-polarization scanning radar and a vertically pointing S-band profiler radar. The C-band radar used in this study, known as the Atmospheric Radar for Meteorological and Operational Research (ARMOR) radar, is situated in Huntsville, Alabama. The S-band profiler radar is maintained by the NOAA Physical Sciences Laboratory (PSL) and is about 50 kilometers west of the ARMOR radar site. A convective storm event characterized by squall lines is investigated. Within the squall-line region, the presence of partially melted ice particles led to significant attenuation of the radar signal at C-band, resulting in reduced reflectivity (Z). To address this issue, a conventional attenuation correction approach based on differential propagation phase measurements for the rain medium was applied and compared with measurements from the S-band profiler. The analysis revealed that correcting for rain attenuation alone was insufficient to address the heightened attenuation caused by melting ice hydrometeors. Consequently, a new attenuation correction methodology was developed that accounts for melting ice hydrometeors.
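The conventional rain correction mentioned above ties path-integrated attenuation to the accumulated differential phase, Z_corr(r) = Z(r) + α·ΔΦdp(r). The sketch below uses an assumed C-band rain coefficient and a made-up ray profile; it is an illustration of the standard phase-based method, not this thesis's melting-ice procedure:

```python
# Two-way path-integrated attenuation is taken proportional to the
# accumulated differential phase along the ray.
ALPHA_RAIN = 0.08  # dB per degree, a typical C-band rain value (assumed)

def correct_reflectivity(z_dbz, phidp_deg):
    """Apply phase-based attenuation correction along a ray."""
    phi0 = phidp_deg[0]
    return [z + ALPHA_RAIN * (phi - phi0)
            for z, phi in zip(z_dbz, phidp_deg)]

# Hypothetical ray through a rain cell: reflectivity sags as phase builds.
z = [45.0, 42.0, 38.0, 35.0]
phidp = [10.0, 30.0, 60.0, 80.0]
print([round(v, 1) for v in correct_reflectivity(z, phidp)])
# [45.0, 43.6, 42.0, 40.6]
```

The thesis's piecewise extension applies a larger, separately estimated coefficient over the range gates identified as containing melting ice, rather than one rain coefficient along the whole ray.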
Initially, profiles of specific differential propagation phase (Kdp) were studied to identify the exact locations (range gates) containing melting ice particles. An attenuation correction coefficient for melting ice hydrometeors was estimated, and a piecewise attenuation correction procedure was implemented to address regions of rain and of melting ice hydrometeors separately. Validation of the new attenuation correction technique involved simultaneous comparison of vertical reflectivity profiles obtained from the C-band radar and the S-band profiler. Measurements from the two instruments were matched spatially and temporally to account for their different viewing geometries. The results demonstrate that the new approach significantly enhanced the correlation between profiler measurements and attenuation-corrected reflectivity from the C-band radar. Overall, this thesis experimentally determines the attenuation coefficient in melting ice, which is scarcely available in the literature today.

Item Open Access
Rotor position synchronization control methods in central-converter multi-machine architectures with application to aerospace electrification (Colorado State University. Libraries, 2024) Lima, Cláudio de Andrade, author; Cale, James, advisor; Chong, Edwin, committee member; Herber, Daniel, committee member; Kirby, Michael, committee member

With the continuous advancement of the aerospace industry, there has been a significant shift towards More Electric Aircraft (MEA). Advantages of electrifying aircraft actuation systems include lower weight (and hence lower fuel consumption), robustness, flexibility, ease of integration, and greater availability of sensors for better system diagnostics. One cannot ignore the challenges of the electrification process, which encompass finding appropriate hardware architectures and control schemes, and achieving at least the same reliability as traditional drives.
The thrust reverser actuation system (TRAS), which acts during landing to reduce the runway length necessary for the aircraft to fully decelerate, has significant potential to be replaced by an electromechanical version, the so-called EM-TRAS. Among the different hardware architectures, the central-converter multi-machine (CCMM) architecture stands out for employing a single power converter that drives multiple machines in parallel, saving weight and space inside the aircraft. This solution comes with challenges related to the requirement of ensuring position synchronization among all the machines, even under potentially unbalanced mechanical loads. Since there is only one central converter, all the machines are subject to its common output, limiting the control independence of each machine. Moreover, the lack of position synchronization among the machines can cause harmful stresses to the mechanical structure of the EM-TRAS. This work proposes a solution for position synchronization under CCMM architectures for aerospace applications. The proposed method utilizes three-phase external and variable resistors connected in series with each of the machines, which increases the degrees of freedom (DOF) available to independently control each machine under different demands. Mathematical modeling for the different components of the system is presented, from which the proposed solution is derived. Numerical simulations are used to demonstrate the capabilities of the external resistor method. The performance of the position synchronization is enhanced via H-infinity control design methods. Hardware experiments are also presented, obtained from an experimental testbed that was partially designed and constructed during this work. Numerical and experimental results are in agreement. Initial findings show that the method is promising and works well under some operating conditions. However, some limitations of the method are presented, such as unstable operation under negative loads.
An alternative position synchronization method for CCMM systems is proposed at the end of this work. The method is based on independently controlled induced voltages on each machine's power cables through low-power auxiliary converters and three-phase compact transformers, resulting in independent terminal voltages applied to each machine. This work describes the method and validates it through numerical simulations. Initial findings show that the method overcomes some of the limitations of the external resistors method, while keeping -- and, in some cases, improving -- the overall performance in terms of convergence time and peak position error.Item Open Access Hardware-software codesign of silicon photonic AI accelerators(Colorado State University. Libraries, 2024) Sunny, Febin P., author; Pasricha, Sudeep, advisor; Nikdast, Mahdi, advisor; Chen, Haonan, committee member; Malaiya, Yashwant K., committee memberMachine learning applications have become increasingly prevalent over the past decade across many real-world use cases, from smart consumer electronics to automotive, healthcare, cybersecurity, and language processing. This prevalence has been fueled by the emergence of powerful machine learning models, such as Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). As researchers explore deeper models with higher connectivity, the computing power and the memory requirement necessary to train and utilize them also increase. Such increasing complexity also necessitates that the underlying hardware platform consistently deliver better performance while satisfying strict power constraints. Unfortunately, the limited performance-per-watt in today's computing platforms – such as general-purpose CPUs, GPUs, and electronic neural network (NN) accelerators – creates significant challenges for the growth of new deep learning and AI applications.
These electronic computing platforms face fundamental limits in the post-Moore's-Law era due to increased ohmic losses and capacitance-induced latencies in interconnects, as well as power inefficiencies and reliability concerns that reduce yields and increase costs with semiconductor-technology scaling. A solution for improving performance-per-watt for AI model processing is to explore more efficient hardware NN accelerator platforms. Silicon photonics has shown promise in terms of achievable energy efficiency and latency for data transfers. It is also possible to use photonic components to perform computation, e.g., matrix-vector multiplication. Such photonics-based AI accelerators can not only address the fan-in and fan-out problem with linear algebra processors, but their operational bandwidth can approach the photodetection rate (typically in the hundreds of GHz), which is orders of magnitude higher than electronic systems today that operate at a clock rate of a few GHz. A solution to the data-movement bottleneck can be the use of silicon photonics technology for photonic networks-on-chip (PNoCs), which can enable ultra-high-bandwidth, low-latency, and energy-efficient communication. However, to ensure reliable, efficient, and high-throughput communication and computation using photonics, several challenges must be addressed first. Photonic computation is performed in the analog domain, which makes it susceptible to various noise sources and drives down the achievable resolution for representing NN model parameters. To increase the reliability of silicon photonic AI accelerators, fabrication-process variation (FPV), which is the change in physical dimensions and characteristics of devices due to imperfections in fabrication, must be addressed.
FPVs induce resonant wavelength shifts that must be compensated for the microring resonators (MRs), the fundamental devices used to realize photonic computation and communication in our proposed accelerator architectures, to operate correctly. Without this correction, FPVs will cause increased crosstalk and data corruption during photonic communication and can also lead to errors during photonic computation. Accordingly, correction for FPVs is an essential part of reliable computation in silicon photonic-based AI accelerators. Even with FPV-resilient silicon photonic devices, the tuning latency incurred by thermo-optic (TO) tuning and the thermal crosstalk it can induce are significant. The latency, which can be in the microsecond range, impacts the overall throughput of the accelerator, and the thermal crosstalk impacts its reliable operation. At the architectural level, it is also necessary to ensure that NN processing makes efficient use of the photonic resources in terms of wavelengths, and that NN model-aware decisions are made regarding device deployment, arrangement, and multiply-and-accumulate (MAC) unit design. To address these challenges, the major contributions of this thesis focus on a hardware-software co-design framework to enable high-throughput, low-latency, and energy-efficient AI acceleration across various neural network models using silicon photonics. At the architectural level, we have proposed wavelength reuse schemes, vector decomposition, and NN-aware MAC unit designs for increased efficiency in laser power consumption. In terms of NN-aware designs, we have proposed layer-specific acceleration units, photonic batch normalization folding, and fine-grained sparse NN acceleration units.
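The analog-resolution limitation discussed above can be illustrated numerically. In an MR-bank matrix-vector multiply, each weight is held as an analog MR transmission state with only a few bits of effective resolution, and the photodetector sums the per-wavelength products. This sketch is a behavioral model under assumed parameters (bit depth, noise floor), not the accelerator design from this thesis:

```python
import numpy as np

def photonic_mvm(weights, x, bits=4, rng=None):
    """Idealized MR-bank matrix-vector multiply.

    Weights (assumed normalized to [0, 1]) are quantized to the analog
    resolution an MR can reliably hold; the photodetector incoherently
    sums the products. Bit depth and noise level are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    levels = 2 ** bits - 1
    w_q = np.round(np.clip(weights, 0.0, 1.0) * levels) / levels  # MR states
    y = w_q @ x                                   # summation at photodetector
    return y + rng.normal(0.0, 1e-3, size=y.shape)  # analog noise floor
```

Raising `bits` (as the 16-bit-per-MR result below the fold aims to do) shrinks the quantization error, while the additive term models the noise sources that cap achievable resolution.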
To tackle the reliability challenges introduced by FPV, we have performed device-level design-space exploration and optimization to design MRs that are more tolerant to FPVs than the state-of-the-art efforts in this area. We also adapt thermal eigenmode decomposition and have devised various novel techniques to manage thermal and spectral crosstalk sources, allowing our silicon photonic-based AI accelerators to reach up to 16-bit parameter resolution per MR, which enables high accuracy for most NN models.Item Open Access Improving radar quantitative precipitation estimation through optimizing radar scan strategy and deep learning(Colorado State University. Libraries, 2024) Wang, Liangwei, author; Chen, Haonan, advisor; Chandrasekaran, Venkatachalam, committee member; Wang, Haonan, committee memberAs radar technology plays a crucial role in various applications, including weather forecasting and military surveillance, understanding the impact of different radar scan elevation angles is paramount to optimizing radar performance and enhancing its effectiveness. The elevation angle, which refers to the vertical angle at which the radar beam is directed, significantly influences the radar's ability to detect, track, and identify targets. The effect of different elevation angles on radar performance depends on factors such as radar type, operating environment, and target characteristics. To illustrate the impact of lowering the minimum scan elevation angle on surface rainfall mapping, this article focuses on the KMUX WSR-88D radar in Northern California as an example, within the context of the National Weather Service's efforts to upgrade its operational Weather Surveillance Radar. By establishing polarimetric radar rainfall relations using local disdrometer data, the study aims to estimate surface rainfall from radar observations, with a specific emphasis on shallow orographic precipitation.
The findings indicate that a lower scan elevation angle yields superior performance, with a significant 16.1% improvement in the normalized standard error and a 19.5% enhancement in the Pearson correlation coefficient, particularly at long distances from the radar. In addition, while conventional approaches to radar rainfall estimation have limitations, recent studies have demonstrated that deep learning techniques can mitigate parameterization errors and enhance precipitation estimation accuracy. However, training a model that can be applied to a broad domain poses a challenge. To address this, the study leverages crowdsourced data from NOAA and SFL, employing a convolutional neural network with a residual block to transfer knowledge learned from one location to other domains characterized by different precipitation properties. The experimental results showcase the efficacy of this approach, highlighting its superiority over conventional fixed-parameter rainfall algorithms. Machine learning methods have shown promising potential in improving the accuracy of quantitative precipitation estimation (QPE), which is critical in hydrology and meteorology. While significant progress has been made in applying machine learning to QPE, there is still ample room for further research and development. Future endeavors in machine learning-based QPE will primarily focus on enhancing model accuracy, reliability, and interpretability while considering practical operational applications in hydrology and meteorology.Item Open Access Path planning for autonomous aerial vehicles using Monte Carlo tree search(Colorado State University. Libraries, 2024) Vasutapituks, Apichart, author; Chong, Edwin K.
P., advisor; Azimi-Sadjadi, Mahmood, committee member; Pinaud, Olivier, committee member; Pezeshki, Ali, committee memberUnmanned aerial vehicles (UAVs), or drones, are widely used in civilian and defense applications, such as search and rescue operations, monitoring and surveillance, and aerial photography. This dissertation focuses on autonomous UAVs for tracking mobile ground targets. Our approach builds on optimization-based artificial intelligence for path planning by calculating approximately optimal trajectories. This approach poses a number of challenges, including the need to search over large solution spaces in real time. To address these challenges, we adopt a technique involving a rapidly-exploring random tree (RRT) and Monte Carlo tree search (MCTS). The RRT technique increases in computational cost as we increase the number of mobile targets and the complexity of the dynamics. Our MCTS approach executes a tree search based on random sampling to generate trajectories in real time. We develop a variant of MCTS for online path planning to track ground targets, together with an associated algorithm called P-UAV. Our algorithm is based on the framework of partially observable Monte Carlo planning, originally developed in the context of MCTS for Markov decision processes. Our real-time approach exploits a parallel-computing strategy with a heuristic random-sampling process. In our framework, we explicitly incorporate threat evasion, obstacle collision avoidance, and resilience to wind. The approach embodies an exploration-exploitation tradeoff in seeking a near-optimal solution in spite of the huge search space. We provide simulation results to demonstrate the effectiveness of our path-planning method.Item Embargo Transient phase microscopy using balanced-detection temporal interferometry and a compact piezoelectric microscope design with sparse inpainting(Colorado State University.
Libraries, 2024) Coleal, Cameron N., author; Wilson, Jesse, advisor; Bartels, Randy, committee member; Levinger, Nancy, committee member; Adams, Henry, committee memberTransient phase detection, which measures Re{ΔN}, is the complement to transient absorption detection, which measures Im{ΔN}. This work extends transient phase detection from spectroscopy to microscopy using a fast-galvanometer point-scanning setup and compares the trade-offs in transient phase versus transient absorption microscopy for the same pump and probe wavelengths. The realization of transient phase microscopy in conjunction with transient absorption microscopy opens a new door to measuring excited-state kinetics with phase-based or absorption-based techniques; depending on the sample and the wavelengths in use, transient phase detection may provide a signal improvement over transient absorption. Until now, transient phase microscopy has been a neglected technique in ultrafast pump-probe imaging applications. Additionally, this work evaluates a miniature piezoelectric actuator to replace galvanometers in a compact point-scanning microscope design. Sparsity limitations present in the design are addressed by the construction of a Fourier-projections-based inpainting algorithm, which could enable faster image acquisition in future applications.Item Open Access Investigation on the structural, mechanical and optical properties of amorphous oxide thin films for gravitational wave detectors(Colorado State University. Libraries, 2024) Castro Lucas, Samuel, author; Menoni, Carmen, advisor; Rocca, Jorge, committee member; Sambur, Justin, committee memberAmorphous oxide thin films grown through physical vapor deposition methods, like ion beam sputtering, play a crucial role in optical interference coatings for high-finesse optical cavities, such as those used in gravitational wave detectors.
The stability of these atomically disordered solids is significantly influenced by both deposition conditions and composition; consequently, both can be used to tune structural, mechanical, and optical properties. The sensitivity of current gravitational wave interferometric detectors in the frequency range around 100 Hz is currently limited by a combination of quantum noise and coating thermal noise (CTN). CTN is associated with thermally driven random displacement fluctuations in the high-reflectance amorphous oxide coatings of the end-test masses in the interferometer. These fluctuations cause internal friction, acting as an anelastic relaxation mechanism by dissipating elastic energy. The dissipated internal elastic energy can be quantified through the mechanical loss angle (Q⁻¹). These unwanted fluctuations associated with mechanical loss can be reduced through modifications of the atomic network in the amorphous oxides. Specifically, the combination of two or more metal cations in a mixed amorphous thin film and post-deposition annealing are known to favorably impact the network organization and hence reduce internal friction. The first study of this thesis reports on the structural modifications of amorphous TiO2 mixed with GeO2 and with SiO2. High-index materials for gravitational wave detectors, such as amorphous TiO2:GeO2 (44% Ti), have been found to exhibit low mechanical loss after annealing at 600°C. Reaffirming that annealing is a major contributor to reducing mechanical loss, this thesis examines: a) cation interdiffusion between amorphous oxides of TiO2 with GeO2 and with SiO2, and b) the modifications to the structural properties, both after annealing. The annealing temperature at which this interdiffusion mechanism occurs is key for pinpointing structural rearrangements that are favorable for reducing internal friction.
Furthermore, it is also important to determine whether diffusion into SiO2 occurs after annealing, given that the multi-layer mirrors of gravitational wave detectors utilize SiO2 as a low-index layer. The study of cation interdiffusion used nanolaminates of TiO2, SiO2, and GeO2 to identify cation diffusion across the interface. The results show Ge and Ti cation interfacial diffusion at temperatures above 500°C. In contrast, Si cations diffuse into TiO2 at a temperature around 850°C, and Ti into SiO2 at around 950°C. These temperatures correspond to an average of 0.8 of the glass transition temperature (Tg), with Tg = 606°C for GeO2 and Tg = 1187°C for SiO2. These findings support previous research by our group on amorphous GeO2, which showed that elevated-temperature deposition and annealing at 0.8 Tg lead to favorable organization of the atomic network, which is associated with low mechanical loss. The second study of this thesis investigates the structural, mechanical, and optical properties of amorphous ternary oxide mixtures following post-annealing. These mixtures consist of TiO2:GeO2 combined with SiO2 and ZrO2, as well as TiO2:SiO2 combined with ZrO2. Candidate high-index layers, such as amorphous TiO2:GeO2 (44% Ti) and TiO2:SiO2 (69.5% Ti), exhibit low mechanical loss after post-annealing at 600°C and 850°C, respectively. The inclusion of a third metal cation is shown to delay the onset of crystallization to temperatures around 800°C. The addition of a third metal cation also modifies the residual stress of the ternary compared to the binary materials. There is an indication of densification when annealing past 600°C. The reduction in residual tensile stress, combined with the higher crystallization temperature of the ternary mixtures, presents attractive properties. These properties will expand the parameter space for post-deposition processing, mainly of the TiO2:GeO2-based mixtures, to further reduce mechanical loss.
This advancement paves the way for amorphous oxide coatings for gravitational wave detectors with lower mechanical loss, aligning with plans for future detectors.Item Embargo A microphysiological system for studying barrier health of live tissues in real time(Colorado State University. Libraries, 2024) Way, Ryan, author; Chen, Thomas W., advisor; Wilson, Jesse, committee member; Chicco, Adam, committee memberEpithelial cells create barriers that protect many different components in the body from their external environment. The gut in particular carries bacteria and other infectious agents. A healthy gut epithelial barrier prevents unwanted substances from accessing the underlying lamina propria while maintaining the ability to digest and absorb nutrients. Increased gut barrier permeability, better known as leaky gut, has been linked to several chronic inflammatory diseases. Yet understanding the cause of leaky gut and developing effective interventions remain elusive due to the lack of tools to maintain a tissue's physiological environment while elucidating cellular functions under various stimuli ex vivo. This thesis presents a microphysiological system capable of recording real-time barrier permeability of mouse gut tissues in a realistic physiological environment over extended durations. Key components of the microphysiological system include a microfluidic chamber designed to hold the live tissue explant and create a sufficient microphysiological environment to maintain tissue viability; a proper media composition that preserves the microbiome and creates the necessary oxygen gradients across the barrier; integrated sensor electrodes and supporting electronics for acquiring and calculating transepithelial electrical resistance (TEER); and a scalable system architecture that allows multiple chambers to run in parallel for increased throughput. The experimental results demonstrate that the system can maintain tissue viability for up to 72 hours.
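TEER values like those acquired by the system above are conventionally reported per unit area so that chambers of different sizes can be compared. A minimal sketch of that normalization follows; the blank-subtraction convention is standard practice in the field, not a detail taken from this thesis's electronics:

```python
def teer_ohm_cm2(r_total_ohm, r_blank_ohm, area_cm2):
    """Unit-area transepithelial electrical resistance.

    r_total_ohm: resistance measured across the tissue-loaded chamber
    r_blank_ohm: resistance of the chamber and media alone (no tissue)
    area_cm2:    tissue area exposed between the electrodes
    """
    return (r_total_ohm - r_blank_ohm) * area_cm2
```

Multiplying (rather than dividing) by area reflects that resistances of parallel membrane patches add in reciprocal: a larger exposed area yields a proportionally lower raw resistance for the same barrier quality.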
The results also show that the custom-built and integrated TEER sensors are sufficiently sensitive to distinguish differing levels of barrier permeability when tissues are treated with collagenase or low-pH media compared to controls. Permeability variations in tissue explants from different positions in the intestinal tract were also investigated using TEER, revealing their disparities in permeability. Finally, the results quantitatively determine the effect of the muscle layer on total epithelial resistance.Item Open Access Air pollutant source estimation from sensor networks(Colorado State University. Libraries, 2024) Thakur, Tanmay, author; Lear, Kevin, advisor; Pezeshki, Ali, committee member; Carter, Ellison, committee memberA computationally efficient model for the estimation of unknown source parameters using the Gaussian plume model, linear least-squares optimization, and gradient descent is presented in this work. This thesis discusses results from simulations of a two-dimensional field using advection-diffusion equations, underlining the benefits of plume solutions when compared to other methods. The Gaussian plume spread of pollutant concentrations has been studied in this work and modeled in Matlab to estimate the pollutant concentration at various wireless sensor locations. To set up the model simulations, we created a field in Matlab with several pollutant-measuring sensors and one or two pollutant-emitting sources. The forward model estimated the concentration measured at the sensors when the sources emit pollutants. These pollutants were programmed in Matlab to spread according to Gaussian plume equations. The initial work estimated the concentration of the pollutants with varying sensor noise, wind speed, and wind angle. The varying noise affects the sensors' readings, whereas the wind speed and wind angle affect the plume shape.
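The forward model just described can be sketched compactly. This is a Python rendering of the standard ground-level Gaussian plume formula, not the thesis's Matlab code; the linear dispersion-coefficient growth rates below are illustrative stand-ins for the stability-class-dependent values used in practice:

```python
import numpy as np

def plume_concentration(sensor_xy, src_xy, q, u, theta):
    """Ground-level Gaussian plume concentration at one sensor.

    q: source emission rate, u: wind speed, theta: wind direction (rad).
    Dispersion growth rates (0.08, 0.06) are assumed for illustration.
    """
    dx = sensor_xy[0] - src_xy[0]
    dy = sensor_xy[1] - src_xy[1]
    # rotate into the wind-aligned frame: xd downwind, yc crosswind
    xd = dx * np.cos(theta) + dy * np.sin(theta)
    yc = -dx * np.sin(theta) + dy * np.cos(theta)
    if xd <= 0:
        return 0.0            # sensor is upwind of the source: no plume
    sy, sz = 0.08 * xd, 0.06 * xd
    return q / (np.pi * u * sy * sz) * np.exp(-yc**2 / (2.0 * sy**2))
```

The rotation step is what makes the wind angle reshape the plume, matching the sensitivity to wind speed and direction noted above.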
The forward results are then applied to solving the inverse problem to determine the possible sources and pollutant emission rates in the presence of additive white Gaussian noise (AWGN). A vector of possible sources within a region of interest is minimized using L2 minimization and gradient descent methods. Initially, the input to the inverse model is a random guess for the source location coordinates. Then, initial values for the source emission rates are calculated using the linear least-squares method, since the sensor readings are proportional to the source emission rates. The accuracy of this model is calculated by comparing the predicted source locations with the true locations of the sources. The cost function reaches a minimum value when the predicted sensor concentrations are close to the true concentration values. The model continues to minimize the cost function until it remains fairly constant. The inverse model is initially developed for a single source and later extended to two sources. Different configurations for the number of sources and the locations of the sensors are considered in the inverse model to evaluate its accuracy. After verifying the inverse algorithm with synthetic data, we then used the algorithm to estimate the source of pollution from real air pollution sensor data collected by Purple Air sensors. For this problem, we extracted data from Purpleair.com from 4 sensors around the Woolsey forest fire area in California in 2018 and used the data as input to the inverse model. The predictions suggested the source was located close to the true high-intensity forest fire in that area. Later, we apply a neural network method to estimate the source parameters and compare the estimates of the neural network with the results from the inverse problem using the physical model for the synthetic data. The neural network model uses sequential neural network techniques for training, testing, and predicting the source parameters.
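Because the sensor readings are linear in the emission rates once candidate source locations are fixed, the least-squares initialization described above has a closed form. This sketch uses a hypothetical smooth kernel (`inv_dist`) standing in for the full plume model, to keep it self-contained:

```python
import numpy as np

def estimate_emission_rates(sensor_xy, readings, candidate_src_xy, kernel):
    """Linear least-squares estimate of emission rates for fixed
    candidate source locations (readings are linear in the rates)."""
    # G[i, j] = concentration at sensor i per unit emission of source j
    G = np.array([[kernel(s, c) for c in candidate_src_xy]
                  for s in sensor_xy])
    q, *_ = np.linalg.lstsq(G, np.asarray(readings), rcond=None)
    return q

def inv_dist(sensor, src):
    """Hypothetical stand-in kernel: smooth decay with distance."""
    d2 = (sensor[0] - src[0]) ** 2 + (sensor[1] - src[1]) ** 2
    return 1.0 / (1.0 + d2)
```

In the outer loop, gradient descent would update the guessed source coordinates and re-solve this linear subproblem, so only the nonlinear location variables are searched iteratively.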
The model was trained with sensor concentration readings, source locations, wind speeds, wind angles, and corresponding source emission rates. The model was tested using the testing data set to compare the predictions with the true source locations and emission rates. The training and testing data were subjected to feature engineering practices to improve the model's accuracy. To improve the accuracy of the model, different configurations of activation functions, batch size, and epoch size were used. The neural network model was able to obtain an accuracy above 90% in predicting the source emission rates and source locations. This accuracy varied depending upon the configuration used, such as a single source, multiple sources, the number of sensors, noise levels, wind speed, and wind angle. In the presence of sensor noise, the neural network model was more accurate than the physical inverse model in predicting the source location, based on a comparison of R2 scores for fitting the predicted source location to the true source location. Further work on this model's accuracy will help the development of a real-time air quality wireless sensor network application with automatic pollutant source detection.Item Open Access Effects of background winds and temperature on bores, strong wind shears and concentric gravity waves in the mesopause region(Colorado State University. Libraries, 2009) Yue, Jia, author; She, Chiao-Yao, advisor; Reising, Steven C., advisorUsing data from the CSU sodium lidar and Kyoto University OH airglow imager at Fort Collins, CO, this thesis provides a comprehensive, though qualitative, understanding of three different yet related observed fluid-dynamical phenomena in the mesopause region. The first project involves the convection-excited gravity waves observed in the OH airglow layer at 87 km. A case study on May 11, 2004 is discussed in detail along with statistical studies and ray-tracing modeling.
A single convection source matches the center of the concentric gravity waves. The horizontal wavelengths and periods of these gravity waves were measured as functions of both radius and time. The weak mean background wind between the lower and middle atmosphere determines the penetration of the gravity waves to higher altitudes. The second project involves mesospheric bores observed by the same OH imager. The observation on October 9, 2007 suggests that when a large-amplitude gravity wave is trapped in a thermal duct, its wave front can steepen and form a bore-like structure in the mesopause. In turn, the large gravity wave and its bore may significantly impact the background. A statistical study reveals a possible link between the jet/front system in the lower atmosphere and the large-scale gravity waves and associated bores in the mesopause region. The third project involves the relationship between large wind shear generation and sustainment and convective/dynamic stabilities measured by the sodium lidar at altitudes of 80-105 km during 2002-2005. The correlation between wind shear, S, and Brunt-Väisälä frequency, N, suggests that the maximum sustainable wind shear is determined by the necessary condition for dynamic instability in terms of the Richardson number, leading to the result that the maximal wind shear occurs at altitudes of the lower thermosphere, where the atmosphere is convectively very stable. The dominant source of sustainable large wind shears appears to be semidiurnal tidal-period perturbations with shorter vertical wavelengths and greater amplitudes.Item Open Access Characterization of integrated optical waveguide devices(Colorado State University. Libraries, 2008) Yuan, Guangwei, authorAt the Optoelectronics Research Lab in ECE at CSU, we explore issues of design, modeling, and measurement of integrated optical waveguide devices of interest, such as optical waveguide biosensors and on-chip optical interconnects.
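The shear-limit argument in the lidar study above can be made explicit. Taking the standard Richardson-number criterion (the thesis states only the qualitative conclusion; this derivation fills in the usual reasoning step):

```latex
Ri \;=\; \frac{N^2}{S^2} \;\ge\; \frac{1}{4}
\qquad\Longrightarrow\qquad
S_{\max} \;=\; 2N ,
```

so a shear exceeding twice the Brunt-Väisälä frequency violates the necessary condition for dynamic stability and is rapidly erased by instability. This is why the maximal sustainable shear tracks N and peaks in the convectively very stable lower thermosphere, where N is largest.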
A local evanescent-field array coupled (LEAC) sensor was designed to meet the need for low-trace biological detection without the aid of fluorescent chemical agents. Measurement with the LEAC sensor requires either a commercial near-field scanning optical microscope (NSOM) or newly proposed buried detector arrays. LEAC sensors were first used to detect pseudo-adlayers on the waveguide top surface. These adlayers include SiNx and photoresist. The field modulation obtained from NSOM measurement was approximately 80% for a 17 nm SiNx adlayer that was patterned on the waveguide using plasma reactive ion etching. Later, single and multiple regions of immunoassay complex adlayers were analyzed using NSOM. The most recent results demonstrated the capability of using this sensor to differentiate immunoassay complex regions with different surface coverage ratios. The study on buried detectors revealed a higher sensitivity of the sensor to a thin organic film on the waveguide. By detecting the optical intensity decay rate, the sensor was able to detect a several-nanometer-thick film with 1.7 dB/mm/nm sensitivity. In bulk material analysis, this sensor demonstrated more than a 15 dB/mm absorption coefficient difference between organic oil and air upper claddings. In on-chip optical interconnect research, optical waveguide test structures and leaky-mode waveguide-coupled photodetectors were designed, modeled, and measured. A 16-node H-tree waveguide was used to deliver light into photodetectors and was characterized. Photodetectors at each end node of the H-tree were measured using near-field scanning microscopy. The 0.5-micrometer-wide photodetector demonstrated up to an 80% absorption ratio over just a 10 micrometer length. This absorption efficiency is the highest among reported leaky-mode waveguide-coupled photodetectors. The responsivity and quantum efficiency of this photodetector are 0.35 A/W and 65%, respectively.
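As a consistency check on the detector figures reported above, responsivity and quantum efficiency are tied together by R = η·q·λ/(h·c). The operating wavelength is not stated in this abstract, so the ~670 nm value used below is inferred for illustration only:

```python
def responsivity_A_per_W(quantum_efficiency, wavelength_m):
    """Ideal photodiode responsivity R = eta * q * lambda / (h * c)."""
    q = 1.602176634e-19   # elementary charge (C)
    h = 6.62607015e-34    # Planck constant (J s)
    c = 2.99792458e8      # speed of light (m/s)
    return quantum_efficiency * wavelength_m * q / (h * c)

# With the reported 65% quantum efficiency, 0.35 A/W implies operation
# near 670 nm (an assumed wavelength, not stated in the abstract).
```

The relation also shows why responsivity alone can overstate detector quality at long wavelengths: for fixed η, R grows linearly with λ.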