Theses and Dissertations
Recent Submissions

Item Open Access
Rotor position synchronization control methods in central-converter multi-machine architectures with application to aerospace electrification (Colorado State University. Libraries, 2024)
Lima, Cláudio de Andrade, author; Cale, James, advisor; Chong, Edwin, committee member; Herber, Daniel, committee member; Kirby, Michael, committee member
With the continuous advancement of the aerospace industry, there has been a significant shift toward More Electric Aircraft (MEA). Advantages of electrifying actuation systems in an aircraft include lower weight (and hence lower fuel consumption), robustness, flexibility, ease of integration, and greater availability of sensors for better system diagnostics. The challenges of electrification cannot be ignored: they include finding appropriate hardware architectures and control schemes, and achieving at least the same reliability as traditional drives. The thrust reverser actuation system (TRAS), which acts during landing to reduce the runway length needed for the aircraft to fully decelerate, is a strong candidate for replacement by an electromechanical version, the so-called EM-TRAS. Among the candidate hardware architectures, the central-converter multi-machine (CCMM) stands out for employing a single power converter that drives multiple machines in parallel, saving weight and space inside the aircraft. This solution brings its own challenges, chiefly the requirement of ensuring position synchronization among all the machines, even under potentially unbalanced mechanical loads. Since there is only one central converter, all the machines are subject to its common output, limiting the control independence of each machine. Moreover, lack of position synchronization among the machines can impose harmful stresses on the mechanical structure of the EM-TRAS. This work proposes a solution for position synchronization under CCMM architectures for aerospace applications. The proposed method utilizes three-phase external variable resistors connected in series with each machine, which increases the degrees of freedom (DOF) available to control each machine independently under different demands. Mathematical models of the system components are presented, from which the proposed solution is derived. Numerical simulations demonstrate the capabilities of the external-resistor method, and the position-synchronization performance is enhanced via H-infinity control design methods. Hardware experiments are also presented, obtained from an experimental testbed that was partially designed and constructed during this work; numerical and experimental results are in agreement. Initial findings show that the method is promising and works well under some operating conditions. However, some limitations of the method are presented, such as unstable operation under negative loads. An alternative position-synchronization method for CCMM systems is proposed at the end of this work. The method is based on independently controlled voltages induced on each machine's power cables through low-power auxiliary converters and compact three-phase transformers, resulting in independent terminal voltages applied to each machine. This work describes the method and validates it through numerical simulations. Initial findings show that the method overcomes some of the limitations of the external-resistor method while keeping, and in some cases improving, the overall performance in terms of convergence time and peak position error.
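As a rough illustration of why an external series resistance adds a control degree of freedom, the sketch below evaluates the standard steady-state induction-machine torque expression and shows how the torque-slip curve reshapes as an external stator resistance R_ext is varied. All machine parameters here are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

# Standard per-phase steady-state induction machine torque model (textbook form).
# All parameter values below are illustrative assumptions, not from the thesis.
V = 230.0              # phase voltage (V)
f = 60.0               # supply frequency (Hz)
p = 4                  # number of poles
R1, R2 = 0.5, 0.6      # stator / rotor resistance (ohm, rotor referred to stator)
X1, X2 = 1.1, 1.3      # stator / rotor leakage reactance (ohm)

w_sync = 4.0 * np.pi * f / p   # synchronous mechanical speed (rad/s)

def torque(slip, R_ext=0.0):
    """Electromagnetic torque vs. slip with external series resistance R_ext."""
    s = np.clip(slip, 1e-4, 1.0)
    Z2 = (R1 + R_ext + R2 / s) ** 2 + (X1 + X2) ** 2
    return 3.0 * V**2 * (R2 / s) / (w_sync * Z2)

slips = np.linspace(1e-3, 1.0, 200)
for R_ext in (0.0, 0.5, 1.0):   # increasing series resistance flattens the curve
    T = torque(slips, R_ext)
    print(f"R_ext={R_ext:.1f} ohm -> peak torque {T.max():.1f} N*m")
```

Because each machine gets its own R_ext, a supervisory controller can reshape each torque-speed curve individually even though all machines share the central converter's common output, which is the extra degree of freedom the abstract describes.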

Item Open Access
Hardware-software codesign of silicon photonic AI accelerators (Colorado State University. Libraries, 2024)
Sunny, Febin P., author; Pasricha, Sudeep, advisor; Nikdast, Mahdi, advisor; Chen, Haonen, committee member; Malaiya, Yashwant K., committee member
Machine learning applications have become increasingly prevalent over the past decade across many real-world use cases, from smart consumer electronics to automotive, healthcare, cybersecurity, and language processing. This prevalence has been fueled by the emergence of powerful machine learning models, such as Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). As researchers explore deeper models with higher connectivity, the computing power and memory required to train and use them also increase. This increasing complexity demands that the underlying hardware platform consistently deliver better performance while satisfying strict power constraints. Unfortunately, the limited performance-per-watt of today's computing platforms, such as general-purpose CPUs, GPUs, and electronic neural network (NN) accelerators, creates significant challenges for the growth of new deep learning and AI applications. These electronic computing platforms face fundamental limits in the post-Moore's-Law era due to increased ohmic losses and capacitance-induced latencies in interconnects, as well as power inefficiencies and reliability concerns that reduce yields and increase costs with semiconductor technology scaling. One way to improve performance-per-watt for AI model processing is to explore more efficient hardware NN accelerator platforms. Silicon photonics has shown promise in terms of achievable energy efficiency and latency for data transfers, and photonic components can also perform computation, e.g., matrix-vector multiplication. Such photonics-based AI accelerators not only address the fan-in and fan-out problem of linear algebra processors, but their operational bandwidth can approach the photodetection rate (typically in the hundreds of GHz), orders of magnitude higher than today's electronic systems operating at clock rates of a few GHz. The data-movement bottleneck can likewise be addressed with silicon photonic networks-on-chip (PNoCs), which enable ultra-high-bandwidth, low-latency, and energy-efficient communication. However, to ensure reliable, efficient, and high-throughput communication and computation using photonics, several challenges must be addressed first. Photonic computation is performed in the analog domain, which makes it susceptible to various noise sources and drives down the achievable resolution for representing NN model parameters. To increase the reliability of silicon photonic AI accelerators, fabrication-process variation (FPV), the change in physical dimensions and characteristics of devices due to imperfections in fabrication, must be addressed. FPVs induce resonant wavelength shifts that must be compensated so that the microring resonators (MRs), the fundamental devices realizing photonic computation and communication in the proposed accelerator architectures, operate correctly.
Without this correction, FPVs cause increased crosstalk and data corruption during photonic communication and can also lead to errors during photonic computation. Accordingly, correcting for FPVs is an essential part of reliable computation in silicon photonic AI accelerators. Even with FPV-resilient silicon photonic devices, the tuning latency incurred by thermo-optic (TO) tuning, and the thermal crosstalk it can induce, are significant: the latency, which can be in the microsecond range, impacts the overall throughput of the accelerator, and the thermal crosstalk impacts its reliable operation. At the architectural level, it is also necessary to ensure that NN processing makes efficient use of the photonic resources in terms of wavelengths, and that NN model-aware decisions are made in device deployment, arrangement, and multiply-and-accumulate (MAC) unit design. To address these challenges, the major contributions of this thesis center on a hardware-software co-design framework to enable high-throughput, low-latency, and energy-efficient AI acceleration across various neural network models using silicon photonics. At the architectural level, we propose wavelength-reuse schemes, vector decomposition, and NN-aware MAC unit designs for increased efficiency in laser power consumption. In terms of NN-aware designs, we propose layer-specific acceleration units, photonic batch-normalization folding, and fine-grained sparse NN acceleration units. To tackle the reliability challenges introduced by FPV, we perform device-level design-space exploration and optimization to design MRs that are more tolerant to FPVs than state-of-the-art efforts in this area. We also adapt thermal eigenmode decomposition and devise several novel techniques to manage thermal and spectral crosstalk sources, allowing our silicon photonic AI accelerators to reach up to 16-bit parameter resolution per MR, which enables high accuracy for most NN models.
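To make the photonic matrix-vector-multiplication idea concrete, here is a minimal numerical abstraction of a microring-bank dot product: each weight is encoded as a ring transmission coefficient on its own wavelength, input activations modulate per-wavelength optical power, and a photodetector sums the weighted wavelengths. This is an idealized, noiseless sketch, not the thesis's accelerator design; the function name and quantization model are illustrative assumptions.

```python
import numpy as np

def mr_bank_dot(weights, activations, bits=8):
    """Idealized microring-bank dot product: one wavelength per weight.

    Weights and activations are assumed normalized to [0, 1], mimicking
    optical transmission coefficients and modulated input powers. The
    photodetector output is modeled as the incoherent sum over wavelengths.
    """
    levels = 2 ** bits - 1
    # Analog encoding limits resolution: quantize to the achievable bit depth.
    w_q = np.round(np.clip(weights, 0, 1) * levels) / levels
    x_q = np.round(np.clip(activations, 0, 1) * levels) / levels
    return float(np.sum(w_q * x_q))  # photocurrent ~ sum of per-wavelength powers

rng = np.random.default_rng(0)
w, x = rng.random(16), rng.random(16)
exact = float(np.sum(w * x))
for bits in (4, 8, 16):
    approx = mr_bank_dot(w, x, bits)
    print(f"{bits:2d}-bit MRs: dot={approx:.4f} (error {abs(approx - exact):.2e})")
```

The error trend with bit depth is why FPV and crosstalk matter: anything that perturbs a ring's effective transmission eats directly into the usable parameter resolution, which is what the 16-bit-per-MR result addresses.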

Item Open Access
Improving radar quantitative precipitation estimation through optimizing radar scan strategy and deep learning (Colorado State University. Libraries, 2024)
Wang, Liangwei, author; Chen, Haonan, advisor; Chandrasekaran, Venkatchalam, committee member; Wang, Haonan, committee member
As radar technology plays a crucial role in applications ranging from weather forecasting to military surveillance, understanding the impact of different radar scan elevation angles is paramount for optimizing radar performance. The elevation angle, the vertical angle at which the radar beam is directed, significantly influences the radar's ability to detect, track, and identify targets, and its effect depends on factors such as radar type, operating environment, and target characteristics. To illustrate the impact of lowering the minimum scan elevation angle on surface rainfall mapping, this work focuses on the KMUX WSR-88D radar in Northern California as an example, within the context of the National Weather Service's efforts to upgrade its operational Weather Surveillance Radar. By establishing polarimetric radar rainfall relations using local disdrometer data, the study estimates surface rainfall from radar observations, with a specific emphasis on shallow orographic precipitation. The findings indicate that a lower scan elevation angle yields superior performance, with a significant 16.1% improvement in the normalized standard error and a 19.5% enhancement in the Pearson correlation coefficient, particularly at long distances from the radar. In addition, while conventional approaches to radar rainfall estimation have limitations, recent studies have demonstrated that deep learning techniques can mitigate parameterization errors and enhance precipitation estimation accuracy. However, training a model that can be applied to a broad domain poses a challenge. To address this, the study leverages crowdsourced data from NOAA and SFL, employing a convolutional neural network with a residual block to transfer knowledge learned from one location to other domains characterized by different precipitation properties. The experimental results showcase the efficacy of this approach, highlighting its superiority over conventional fixed-parameter rainfall algorithms. Machine learning methods have shown promising potential in improving the accuracy of quantitative precipitation estimation (QPE), which is critical in hydrology and meteorology. While significant progress has been made in applying machine learning to QPE, there is still ample room for further research and development. Future endeavors in machine learning-based QPE will primarily focus on enhancing model accuracy, reliability, and interpretability while considering practical operational applications in hydrology and meteorology.
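For context on what a "fixed-parameter rainfall algorithm" versus a "polarimetric radar rainfall relation" looks like, the sketch below contrasts the classic Marshall-Palmer-style Z-R relation with a power-law R(Kdp) estimator. The coefficients shown are common textbook values, not the ones the thesis fits from local disdrometer data.

```python
import numpy as np

def rain_rate_from_z(Z_dbz, a=200.0, b=1.6):
    """Fixed-parameter Z-R relation Z = a * R**b (Marshall-Palmer style)."""
    Z_linear = 10.0 ** (Z_dbz / 10.0)          # dBZ -> mm^6/m^3
    return (Z_linear / a) ** (1.0 / b)         # rain rate in mm/h

def rain_rate_from_kdp(kdp, c=44.0, d=0.822):
    """Polarimetric power law R = c * Kdp**d; c and d are illustrative
    values only (the thesis derives its own relations from disdrometer data)."""
    return c * np.sign(kdp) * np.abs(kdp) ** d  # mm/h

print(rain_rate_from_z(40.0))    # ~11 mm/h for a 40 dBZ echo
print(rain_rate_from_kdp(1.0))   # ~44 mm/h for Kdp = 1 deg/km
```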

Item Open Access
Path planning for autonomous aerial vehicles using Monte Carlo tree search (Colorado State University. Libraries, 2024)
Vasutapituks, Apichart, author; Chong, Edwin K. P., advisor; Azimi-Sadjadi, Mahmood, committee member; Pinaud, Olivier, committee member; Pezeshki, Ali, committee member
Unmanned aerial vehicles (UAVs), or drones, are widely used in civilian and defense applications, such as search-and-rescue operations, monitoring and surveillance, and aerial photography. This dissertation focuses on autonomous UAVs for tracking mobile ground targets. Our approach builds on optimization-based artificial intelligence for path planning by calculating approximately optimal trajectories. This poses a number of challenges, including the need to search over large solution spaces in real time. To address these challenges, we adopt a technique involving a rapidly exploring random tree (RRT) and Monte Carlo tree search (MCTS). The computational cost of the RRT technique grows with the number of mobile targets and the complexity of the dynamics. Our MCTS approach executes a tree search based on random sampling to generate trajectories in real time. We develop a variant of MCTS for online path planning to track ground targets, together with an associated algorithm called P-UAV. Our algorithm is based on the framework of partially observable Monte Carlo planning, originally developed in the context of MCTS for Markov decision processes. Our real-time approach exploits a parallel-computing strategy with a heuristic random-sampling process, and we explicitly incorporate threat evasion, obstacle collision avoidance, and resilience to wind. The approach embodies an exploration-exploitation tradeoff in seeking a near-optimal solution in spite of the huge search space. We provide simulation results to demonstrate the effectiveness of our path-planning method.
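As a generic illustration of the exploration-exploitation tradeoff at the heart of MCTS, the sketch below implements a bare-bones UCT-style planner on a toy 2-D grid pursuit of a stationary target. The thesis's P-UAV algorithm, dynamics, observation model, and reward structure are far richer; every name and constant here is a stand-in.

```python
import math, random

ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # toy UAV moves on a grid
TARGET = (5, 3)

def reward(state):
    # Negative Manhattan distance: closer to the target is better.
    return -(abs(state[0] - TARGET[0]) + abs(state[1] - TARGET[1]))

def step(state, action):
    return (state[0] + action[0], state[1] + action[1])

class Node:
    def __init__(self, state):
        self.state, self.children, self.visits, self.value = state, {}, 0, 0.0

def rollout(state, depth=8):
    """Simulation phase: random playout, accumulating reward."""
    total = 0.0
    for _ in range(depth):
        state = step(state, random.choice(ACTIONS))
        total += reward(state)
    return total

def uct_search(root_state, iters=2000, c=1.4):
    root = Node(root_state)
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend while fully expanded, maximizing the UCT score.
        while len(node.children) == len(ACTIONS):
            node = max(node.children.values(),
                       key=lambda ch: ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
            path.append(node)
        # Expansion: add one untried action.
        a = random.choice([a for a in ACTIONS if a not in node.children])
        child = Node(step(node.state, a))
        node.children[a] = child
        path.append(child)
        # Simulation + backpropagation.
        value = rollout(child.state)
        for n in path:
            n.visits += 1
            n.value += value
    return max(root.children, key=lambda a: root.children[a].visits)

print(uct_search((0, 0)))  # expected to pick a move toward TARGET, e.g. (1, 0)
```

The exploration constant c trades off revisiting high-value branches against sampling rarely tried ones, which is the same tradeoff the abstract invokes for searching its much larger trajectory space.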

Item Embargo
Transient phase microscopy using balanced-detection temporal interferometry and a compact piezoelectric microscope design with sparse inpainting (Colorado State University. Libraries, 2024)
Coleal, Cameron N., author; Wilson, Jesse, advisor; Bartels, Randy, committee member; Levinger, Nancy, committee member; Adams, Henry, committee member
Transient phase detection, which measures Re{ΔN}, is the complement to transient absorption detection, which measures Im{ΔN}. This work extends transient phase detection from spectroscopy to microscopy using a fast-galvanometer point-scanning setup and compares the trade-offs of transient phase versus transient absorption microscopy for the same pump and probe wavelengths. Realizing transient phase microscopy alongside transient absorption microscopy opens a new door to measuring excited-state kinetics with phase-based or absorption-based techniques; depending on the sample and the wavelengths in use, transient phase detection may provide a signal improvement over transient absorption. Until now, transient phase microscopy has been a neglected technique in ultrafast pump-probe imaging applications. Additionally, this work evaluates a miniature piezoelectric actuator to replace galvanometers in a compact point-scanning microscope design. Sparsity limitations inherent in the design are addressed by a Fourier-projections-based inpainting algorithm, which could enable faster image acquisition in future applications.

Item Open Access
Investigation on the structural, mechanical and optical properties of amorphous oxide thin films for gravitational wave detectors (Colorado State University. Libraries, 2024)
Castro Lucas, Samuel, author; Menoni, Carmen, advisor; Rocca, Jorge, committee member; Sambur, Justin, committee member
Amorphous oxide thin films grown through physical vapor deposition methods such as ion beam sputtering play a crucial role in optical interference coatings for high-finesse optical cavities, such as those used in gravitational wave detectors. The stability of these atomically disordered solids is significantly influenced by both deposition conditions and composition, which consequently enable tuning of their structural, mechanical, and optical properties. The sensitivity of current gravitational wave interferometric detectors in the frequency range around 100 Hz is limited by a combination of quantum noise and coating thermal noise (CTN). CTN is associated with thermally driven random displacement fluctuations in the high-reflectance amorphous oxide coatings of the end test masses in the interferometer. These fluctuations cause internal friction, acting as an anelastic relaxation mechanism that dissipates elastic energy; the dissipated internal elastic energy can be quantified through the mechanical loss angle (Q-1). These unwanted fluctuations associated with mechanical loss can be reduced through modifications of the atomic network in the amorphous oxides. Specifically, combining two or more metal cations in a mixed amorphous thin film and post-deposition annealing are known to favorably impact the network organization and hence reduce internal friction. The first study of this thesis reports on the structural modifications between amorphous TiO2 with GeO2 and with SiO2. High-index materials for gravitational wave detectors, such as amorphous TiO2:GeO2 (44% Ti), have been found to exhibit low mechanical loss after annealing at 600°C. Reaffirming annealing as a major contributor to reducing mechanical loss, this thesis examines (a) cation interdiffusion between amorphous oxides of TiO2 with GeO2 and with SiO2, and (b) the modifications to the structural properties, both after annealing. The annealing temperature at which this interdiffusion mechanism occurs is key for pinpointing structural rearrangements that are favorable for reducing internal friction. Furthermore, determining whether diffusion occurs into SiO2 after annealing is also important, given that the multilayer mirrors of gravitational wave detectors use SiO2 as the low-index layer. The study of cation interdiffusion used nanolaminates of TiO2, SiO2, and GeO2 to identify cation diffusion across the interface. The results show Ge and Ti cation interfacial diffusion at temperatures above 500°C. In contrast, Si cations diffuse into TiO2 at around 850°C and Ti into SiO2 at around 950°C. These temperatures correspond to an average of 0.8 of the glass transition temperature (Tg), with Tg = 606°C for GeO2 and Tg = 1187°C for SiO2. These findings support previous research by our group on amorphous GeO2, which showed that elevated-temperature deposition and annealing at 0.8 Tg lead to favorable organization of the atomic network associated with low mechanical loss. The second study of this thesis investigates the structural, mechanical, and optical properties of amorphous ternary oxide mixtures following post-annealing. These mixtures consist of TiO2:GeO2 combined with SiO2 and ZrO2, as well as TiO2:SiO2 combined with ZrO2. Candidate high-index layers, such as amorphous TiO2:GeO2 (44% Ti) and TiO2:SiO2 (69.5% Ti), exhibit low mechanical loss after post-annealing at 600°C and 850°C, respectively. The inclusion of a third metal cation is shown to delay the onset of crystallization to temperatures around 800°C and to modify the residual stress of the ternary compared to the binary materials; there is an indication of densification when annealing past 600°C. The reduced residual tensile stress, combined with the higher crystallization temperature of the ternary mixtures, presents attractive properties that will expand the parameter space for post-deposition processing, mainly of the TiO2:GeO2-based mixtures, to further reduce mechanical loss. This advancement paves the way for amorphous oxide coatings for gravitational wave detectors with lower mechanical loss, aligning with plans for future detectors.

Item Embargo
A microphysiological system for studying barrier health of live tissues in real time (Colorado State University. Libraries, 2024)
Way, Ryan, author; Chen, Thomas W., advisor; Wilson, Jesse, committee member; Chicco, Adam, committee member
Epithelial cells create barriers that protect many components of the body from their external environment. The gut in particular carries bacteria and other infectious agents. A healthy gut epithelial barrier prevents unwanted substances from accessing the underlying lamina propria while maintaining the ability to digest and absorb nutrients. Increased gut barrier permeability, better known as leaky gut, has been linked to several chronic inflammatory diseases.
Yet understanding the cause of leaky gut and developing effective interventions remain elusive due to the lack of tools that maintain a tissue's physiological environment while elucidating cellular functions under various stimuli ex vivo. This thesis presents a microphysiological system capable of recording the real-time barrier permeability of mouse gut tissues in a realistic physiological environment over extended durations. Key components of the microphysiological system include a microfluidic chamber designed to hold the live tissue explant and create a sufficient microphysiological environment to maintain tissue viability; a media composition that preserves the microbiome and creates the necessary oxygen gradients across the barrier; integrated sensor electrodes and supporting electronics for acquiring and calculating transepithelial electrical resistance (TEER); and a scalable system architecture that allows multiple chambers to run in parallel for increased throughput. The experimental results demonstrate that the system can maintain tissue viability for up to 72 hours. The results also show that the custom-built, integrated TEER sensors are sufficiently sensitive to distinguish differing levels of barrier permeability when tissues are treated with collagenase or low-pH media compared to controls. Permeability variations among tissue explants from different positions in the intestinal tract were also investigated using TEER, revealing their disparities in permeability. Finally, the results quantitatively determine the effect of the muscle layer on total epithelial resistance.
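As background on the TEER quantity the system measures, a common normalization is to subtract the blank (no-tissue) resistance of the chamber and scale by the exposed tissue area. The sketch below shows that arithmetic; the numbers are illustrative, not measurements from the thesis.

```python
def teer_ohm_cm2(r_measured_ohm, r_blank_ohm, area_cm2):
    """Unit-area TEER: (measured - blank chamber resistance) * exposed area."""
    return (r_measured_ohm - r_blank_ohm) * area_cm2

# Hypothetical readings: control tissue vs. collagenase-treated tissue
# in a chamber with a 0.33 cm^2 aperture and 120 ohm blank resistance.
for label, r in [("control", 420.0), ("collagenase", 210.0)]:
    print(label, teer_ohm_cm2(r, 120.0, 0.33), "ohm*cm^2")
```

Lower unit-area TEER corresponds to a leakier barrier, which is how treatments such as collagenase or low-pH media are distinguished from controls.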

Item Open Access
Air pollutant source estimation from sensor networks (Colorado State University. Libraries, 2024)
Thakur, Tanmay, author; Lear, Kevin, advisor; Pezeshki, Ali, committee member; Carter, Ellison, committee member
A computationally efficient model for estimating unknown source parameters using the Gaussian plume model, linear least-squares optimization, and gradient descent is presented in this work. This thesis discusses results from simulations of a two-dimensional field using advection-diffusion equations, underlining the benefits of plume solutions compared to other methods. The Gaussian plume spread of pollutant concentrations is studied and modeled in Matlab to estimate the pollutant concentration at various wireless sensor locations. To set up the model simulations, we created a field in Matlab with several pollutant-measuring sensors and one or two pollutant-emitting sources. The forward model estimated the concentrations measured at the sensors when the sources emit pollutants; the pollutants were programmed in Matlab to spread according to the Gaussian plume equations. The initial work estimated pollutant concentrations under varying sensor noise, wind speed, and wind angle: the noise affects the sensors' readings, whereas wind speed and angle affect the plume shape. The forward results are then applied to the inverse problem of determining the possible source locations and pollutant emission rates in the presence of additive white Gaussian noise (AWGN). A vector of possible sources within a region of interest is minimized using L2 minimization and gradient descent. Initially, the input to the inverse model is a random guess for the source location coordinates; initial values for the source emission rates are then calculated using the linear least-squares method, since the sensor readings are proportional to the source emission rates. The accuracy of this model is assessed by comparing the predicted source locations with the true locations. The cost function reaches a minimum when the predicted sensor concentrations are close to the true concentration values, and the model continues to minimize the cost function until it remains essentially constant. The inverse model is initially developed for a single source and later extended to two sources, and different configurations of source count and sensor locations are considered to evaluate its accuracy. After verifying the inverse algorithm with synthetic data, we used it to estimate the source of pollution from real air pollution sensor data collected by PurpleAir sensors. For this problem, we extracted data from Purpleair.com for 4 sensors around the Woolsey forest fire area in California in 2018 and used the data as input to the inverse model. The predictions placed the source close to the true high-intensity forest fire in that area. Later, we apply a neural network method to estimate the source parameters and compare its estimates with the results of the physics-based inverse model on the synthetic data. The neural model uses sequential neural network techniques for training, testing, and predicting the source parameters. The model was trained with sensor concentration readings, source locations, wind speeds, wind angles, and corresponding source emission rates, and tested on a held-out data set to compare predictions with the true source locations and emission rates. The training and testing data were subjected to feature engineering to improve accuracy, and different configurations of activation function, batch size, and epoch count were explored. The neural network model obtained an accuracy above 90% in predicting source emission rates and locations, with the exact figure depending on the configuration used: single source versus multiple sources, number of sensors, noise levels, wind speed, and wind angle. In the presence of sensor noise, the neural network model was more accurate than the physics-based inverse model in predicting source location, based on a comparison of R2 scores for fitting predicted to true source locations. Further work on this model's accuracy will support the development of a real-time air-quality wireless sensor network application with automatic pollutant source detection.
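Since the sensor readings are linear in the emission rates, the rate-estimation step reduces to ordinary least squares once the plume geometry is fixed. The sketch below builds a standard ground-level Gaussian plume forward operator for hypothetical sensor and candidate-source positions and recovers emission rates with numpy's least-squares solver; the dispersion coefficients and all coordinates are illustrative assumptions, not the thesis's Matlab setup.

```python
import numpy as np

def plume_concentration(q, src, sensor, u=3.0, wind_dir=0.0, sy=25.0, sz=12.0, H=2.0):
    """Ground-level Gaussian plume concentration at a sensor for source rate q.

    src/sensor are (x, y) in meters; u is wind speed (m/s); wind_dir is the
    wind angle (rad); sy/sz are illustrative constant dispersion parameters.
    """
    dx, dy = sensor[0] - src[0], sensor[1] - src[1]
    # Rotate into wind-aligned coordinates: xd downwind, yc crosswind.
    xd = dx * np.cos(wind_dir) + dy * np.sin(wind_dir)
    yc = -dx * np.sin(wind_dir) + dy * np.cos(wind_dir)
    if xd <= 0:
        return 0.0  # sensor is upwind of the source
    # Ground reflection doubles the vertical term at z = 0.
    g = np.exp(-yc**2 / (2 * sy**2)) * 2 * np.exp(-H**2 / (2 * sz**2))
    return q * g / (2 * np.pi * u * sy * sz)

sources = [(0.0, 0.0), (40.0, 60.0)]             # hypothetical candidate sources
sensors = [(120.0, 10.0), (150.0, 80.0), (200.0, -30.0), (90.0, 50.0)]
q_true = np.array([5.0, 2.0])                    # true emission rates (g/s)

# Forward matrix A[i, j]: concentration at sensor i per unit rate of source j.
A = np.array([[plume_concentration(1.0, s, r) for s in sources] for r in sensors])
y = A @ q_true + 1e-6 * np.random.default_rng(1).standard_normal(len(sensors))

q_est, *_ = np.linalg.lstsq(A, y, rcond=None)    # linear least-squares step
print("estimated rates:", np.round(q_est, 2))    # ~ [5.0, 2.0]
```

In the full inverse problem, the candidate source coordinates themselves are then adjusted by gradient descent on the data-misfit cost, with this least-squares step nested inside each iteration.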
The weak mean background wind between the lower and middle atmosphere determines the penetration of the gravity waves into higher altitude. The second project involves mesospheric bores observed by the same OH imager. The observation on October 9, 2007 suggests that when a large-amplitude gravity wave is trapped in a thermal duct, its wave front could steepen and forms bore-like structure in the mesopause. In turn, the large gravity wave and its bore may significantly impact the background. Statistical study reveals the possible link between the jet/front system in the lower atmosphere and the large-scale gravity waves and associated bores in the mesopause region. The third project involves the relationship between large wind shear generation and sustainment and convective/dynamic stabilities measured by the sodium lidar at the altitude of 80-105 km during 2002-2005. The correlation between wind shear, S, and Brunt-Vaisala frequency, N suggests that the maximum sustainable wind shear is determined by the necessary condition for dynamic instability of Richardson number, leading to the result that the maximal wind shear occurs at altitudes of lower thermosphere where the atmosphere is convectively very stable. The dominate source for sustainable large windshears appears to be the semidiurnal tidal-period perturbations with shorter vertical wavelengths and greater amplitude.Item Open Access Characterization of integrated optical waveguide devices(Colorado State University. Libraries, 2008) Yuan, Guangwei, authorAt the Optoelectronics Research Lab in ECE at CSU, we explore the issues of design, modeling and measurement of integrated optical waveguide devices of interest, such as optical waveguide biosensors and on-chip optical interconnects. A local evanescent-field array coupled (LEAC) sensor was designed to meet the needs for low-trace biological detection without florescent chemical agent aids. The measurement of LEACs sensor requires the aid of either a commercial near-field scanning optical microscope (NSOM) or new proposed buried detector arrays. LEAC sensors were first used to detect pseudo-adlayers on the waveguide top surface. These adlayers include SiNx and photoresist. The field modulation that was obtained based on NSOM measurement was approximately 80% for a 17 nm SiNx adlayer that was patterned on the waveguide using plasma reactive ion etching. Later, single and multiple regions of immunoassay complex adlayers were analyzed using NSOM. The most recent results demonstrated the capability of using this sensor to differentiate immunoassay complex regions with different surface coverage ratio. The study on buried detectors revealed a higher sensitivity of the sensor to a thin organic film on the waveguide. By detecting the optical intensity decay rate, the sensor was able to detect several nanometer thick film with 1.7 dB/mm/nm sensitivity. In bulk material analysis, this sensor demonstrated more than 15 dB/mm absorption coefficient difference between organic oil and air upper claddings. In on-chip optical interconnect research, optical waveguide test structures and leaky-mode waveguide coupled photodetectors were designed, modeled and measured. A 16-node H-tree waveguide was used to deliver light into photodetectors and characterized. Photodetectors at each end node of the H-tree were measured using near-field scanning microscopy. The 0.5 micrometer wide photodetector demonstrated up to 80% absorption ratio over just a 10 micrometer length. 
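The wind-shear argument can be written compactly. Taking the gradient Richardson number from the Brunt-Vaisala frequency N and the vertical shear S (standard stability theory; the notation is assumed rather than copied from the thesis):

\[
\mathrm{Ri} = \frac{N^{2}}{S^{2}}, \qquad
\mathrm{Ri} < \tfrac{1}{4}\ \text{(necessary for dynamic instability)}
\quad\Longrightarrow\quad
S_{\max} = 2N .
\]

Shear just below 2N can persist without triggering instability, and since N is largest where the atmosphere is most convectively stable, the largest sustainable shears appear in the lower thermosphere, consistent with the lidar observations described above.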

Item Open Access
Characterization of integrated optical waveguide devices (Colorado State University. Libraries, 2008)
Yuan, Guangwei, author
At the Optoelectronics Research Lab in ECE at CSU, we explore the design, modeling, and measurement of integrated optical waveguide devices of interest, such as optical waveguide biosensors and on-chip optical interconnects. A local evanescent-field array coupled (LEAC) sensor was designed to meet the need for low-trace biological detection without the aid of fluorescent chemical agents. Measurement of the LEAC sensor requires either a commercial near-field scanning optical microscope (NSOM) or the newly proposed buried detector arrays. LEAC sensors were first used to detect pseudo-adlayers on the waveguide top surface, including SiNx and photoresist. The field modulation obtained from NSOM measurements was approximately 80% for a 17 nm SiNx adlayer patterned on the waveguide using plasma reactive ion etching. Later, single and multiple regions of immunoassay complex adlayers were analyzed using NSOM; the most recent results demonstrate the sensor's ability to differentiate immunoassay complex regions with different surface coverage ratios. The study of buried detectors revealed a higher sensitivity of the sensor to a thin organic film on the waveguide: by detecting the optical intensity decay rate, the sensor was able to detect a several-nanometer-thick film with 1.7 dB/mm/nm sensitivity. In bulk material analysis, this sensor demonstrated more than 15 dB/mm difference in absorption coefficient between organic oil and air upper claddings. In on-chip optical interconnect research, optical waveguide test structures and leaky-mode waveguide-coupled photodetectors were designed, modeled, and measured. A 16-node H-tree waveguide was used to deliver light into photodetectors and was characterized, with the photodetectors at each end node measured using near-field scanning microscopy. The 0.5 micrometer wide photodetector demonstrated up to 80% absorption over just a 10 micrometer length. This absorption efficiency is the highest among reported leaky-mode waveguide-coupled photodetectors. The responsivity and quantum efficiency of this photodetector are 0.35 A/W and 65%, respectively.

Item Open Access
Applications of extreme ultraviolet compact lasers to nanopatterning and high resolution holographic imaging (Colorado State University. Libraries, 2008)
Wachulak, Przemyslaw Wojciech, author; Marconi, Mario C., advisor
This dissertation describes two applications of extreme ultraviolet light in nanotechnology. Using radiation with a wavelength in the extreme ultraviolet (EUV) range makes it possible to reach scales much smaller than with conventional visible illumination. The first part of this dissertation describes a series of experiments in patterning at nanometer scales with sub-100nm resolution. Two types of photoresists (positive tone, PMMA, and negative tone, HSQ) were patterned over areas up to a few mm2 with features as small as 45nm using the interferometric lithography approach, reaching a resolution equivalent to the illumination wavelength of 46.9nm. For the nanopatterning experiments, two types of interferometers were studied in detail, a Lloyd's mirror configuration and an amplitude-division interferometer; both approaches are presented and their advantages and drawbacks discussed. The second part of the dissertation focuses on holographic imaging with ultimate resolution approaching the wavelength of the illumination. Experiments were performed using Gabor's in-line holographic configuration, and its capabilities in the EUV region are discussed. Holographic imaging was performed with different objects: AFM probes, spherical markers, and carbon nanotubes. The holograms were stored in a high-resolution recording medium (photoresist), digitized with an atomic force microscope, and numerically reconstructed using a code based on the Fresnel propagator algorithm, achieving the ultimate wavelength-limited resolution in the reconstructed images. The resolution of the carbon nanotube images was assessed by two independent measurements: a knife-edge test yielding 45.5nm, and an algorithm based on the correlation between the reconstructed image and a set of templates of variable resolution obtained by successive Gaussian filtering, which yielded a resolution of ~46nm. A similar algorithm allowing simultaneous assessment of the resolution and the size of features was applied to EUV microscopy images, confirming the validity and robustness of the code. A very fast, non-recursive reconstruction algorithm based on the fast Fourier transform allowed three-dimensional surface reconstruction of the hologram by optical numerical sectioning, with a lateral resolution of ~200nm and a depth resolution of ~2μm.

Item Open Access
Towards emulation of large-scale IP networks using end-to-end packet delay characteristics (Colorado State University. Libraries, 2008)
Vivanco, Daniel A., author; Jayasumana, Anura, advisor
Network emulation combines concepts from network simulation and measurement, providing an emulated network testbed over which applications and protocols can be tested. Existing network emulators are not scalable due to the limitations of available computer hardware infrastructure and their reliance on one-to-one packet mapping and modeling. This research proposes a measurement-based modeling methodology for the design of a network-in-a-box emulator.
The methodology aims to overcome the limitations of computational overhead and end-to-end network system characterization. A framework for large-scale IP network emulation, named the Overall Trend Replicating Network Emulator Tool (OTRENET), is presented. OTRENET intercepts data packet streams and modifies them, based on network system models, in real time. Avoiding the complexity and overhead of packet-by-packet mapping and modeling, while still producing results consistent with measurements, is achieved by a traffic sampling algorithm that monitors traffic metrics at the per-packet level to dynamically separate traffic into frames. A comprehensive study of end-to-end packet delay dynamics, in the context of network system modeling, is presented. Theoretical foundations, techniques, and measurements for characterizing and modeling network packet delay dynamics under various sending-rate conditions and network stages are developed. Goodness-of-fit results demonstrate the modeling accuracy of both the packet delay and inter-packet gap (IPG) processes when the sending bit rate is small relative to the link capacity; as the sending bit rate increases as a fraction of the bandwidth, however, IPG becomes the better alternative for network system modeling. A novel approach for online modeling of end-to-end packet delay dynamics is proposed to address the non-stationarity of network systems: it models and captures the network system characteristics while taking into account the non-stationarity of the packet delay samples. In general, the results show that analyzing packet delay processes by modeling segmented traces yields a better understanding of network system dynamics.

Item Open Access
Rapid early design space exploration using legacy design data, technology scaling trend and in-situ macro models (Colorado State University. Libraries, 2009)
Thangaraj, Charles V. K., author; Chen, Tom, advisor
The CMOS technology scaling trend, i.e., the doubling of operating frequency and of the number of transistors on a die every eighteen months, also known as Moore's Law, has been a fundamental driver for the semiconductor industry for well over three decades. Scaling CMOS technologies into the deep submicron regime, especially to sub-100 nm dimensions, has caused a significant shift in business and design philosophy and methodology. In addition to the semiconductor industry's maturation, seven key disruptive trends are impacting the industry: competitive landscape changes, technology convergence, greater global connectedness, increased design complexity, commoditization, consumerization, and soaring research, development, and engineering costs. These disruptions have made traditional business models increasingly ineffective and the benefits of Moore's Law insufficient for sustained competitiveness [1]. A 'More-than-Moore' approach to heterogeneous system integration and holistic system optimization strategies, in addition to the benefits of technology scaling, are necessary for future success [2] [3].

Item Open Access
Robust resource allocation in heterogeneous parallel and distributed computing systems (Colorado State University. Libraries, 2008)
Smith, James T., II, author; Siegel, H. J., advisor; Maciejewski, A. A., advisor
In a heterogeneous distributed computing environment, it is often advantageous to allocate system resources in a manner that optimizes a given system performance measure.
However, this optimization often depends on system parameters whose values are subject to uncertainty, so an important research problem arises when system resources must be allocated in the presence of uncertain system parameters. Robustness can be defined as the degree to which a system can function correctly in the presence of parameter values different from those assumed. In this research, we define mathematical models of robustness in both static and dynamic stochastic environments. In addition, we model dynamic environments where system parameter values are provided as point estimates known to deviate substantially from their actual values. The main contributions of this research are (1) mathematical models of robustness suitable for dynamic environments based on single estimates of system parameters; (2) a mathematical model of robustness applicable to environments where the uncertainty in system parameters can be modeled stochastically; (3) a demonstration of the use of this metric to design resource allocation heuristics in a static environment; (4) a mathematical model of robustness in a stochastic dynamic environment; (5) a demonstration of the utility of this dynamic robustness metric through the design of resource allocation heuristics; and (6) the derivation of a robustness metric for evaluating resource allocation decisions in an overlay network, along with a near-optimal resource allocation technique suitable for this environment.

Item Open Access
Robust resource-allocation methods for QOS-constrained parallel and distributed computing systems (Colorado State University. Libraries, 2008)
Shestak, Valdimir, author; Maciejewski, A. A., advisor; Siegel, Howard Jay, advisor
This research investigates the problem of robust resource allocation for distributed computing systems operating under imposed Quality of Service (QoS) constraints. Such systems are often expected to function in a physical environment replete with uncertainty, which causes the amount of processing required over time to fluctuate substantially. The first two studies show how effective resource allocation can be achieved in a heterogeneous shipboard distributed computing system and in an IBM cluster-based imaging system. The general form of a stochastic robustness metric is then presented, based on a mathematical model in which the relationship between uncertainty in system parameters and its impact on system performance is described stochastically. The utility of the established metric is exploited in the design of optimization techniques, based on greedy and iterative approaches, that address the problem of resource allocation in a large class of distributed systems operating on periodically updated data sets. One of the major reasons for possible QoS violations in distributed systems is a loss of resources, frequently caused by abnormal operating conditions. One aspect that makes the resource allocation problem extremely challenging in such systems is the random nature of resource failures and recoveries. The last study presented in this work describes a solution method developed for this case, based on the concepts of the Derman-Lieberman-Ross theorem. The experimental results indicate a significant potential of this approach to generate robust resource allocations in unstable distributed systems.

Item Open Access
Differential gene expression in Escherichia coli following exposure to non-thermal atmospheric-pressure plasma (Colorado State University. Libraries, 2008)
Sharma, Ashish, author; Collins, George, advisor; Pruden, Amy, advisor
Plasma decontamination provides a low-temperature, non-toxic means of treating objects where heating and exposure to poisonous compounds are not acceptable, especially in applications relating to medical devices and food packaging. The effects of the various plasma constituents (UV photons, reactive species, charged particles, etc.) acting independently and/or synergistically on bacteria at the biomolecular level are not well understood. High-density oligonucleotide microarrays were used to explore the differential gene expression of the entire genome of E. coli following plasma treatment. The results indicate a significant induction of genes involved in DNA repair and recombination, suggesting that plasma exposure caused substantial DNA damage in the cell. There was also evidence of oxidative stress and suppression of genes involved in the housekeeping functions of energy metabolism and ion transport. Experiments were also carried out to optimize plasma operating parameters to achieve a higher rate of microbial inactivation. Overall, the results of this study will help to further optimize non-thermal plasma applications for bacterial inactivation.

Item Open Access
CMOS-compatible on-chip optical interconnects (Colorado State University. Libraries, 2009)
Pownall, Robert Elliott, author; Lear, Kevin L., advisor
The increase in complexity of integrated circuits (ICs) over the past five decades has resulted in increasing demands on the interconnect layers. In the past decade, the ability of conventional "electrical signal down a metal wire" interconnect to keep up with these increasing demands has come more and more into question. To meet the increasing demands on interconnect and to circumvent the limitations of conventional metal-wire interconnect, various forms of optical interconnect have been proposed.

Item Open Access
Three-dimensional water vapor retrieval using a network of scanning compact microwave radiometers (Colorado State University. Libraries, 2009)
Padmanabhan, Sharmila, author; Reising, S. C., advisor
Quantitative precipitation forecasting is currently limited by the paucity of observations on sufficiently fine temporal and spatial scales. In particular, convective storms have been observed to develop in regions of strong and rapidly evolving moisture gradients that vary spatially on sub-meso-γ scales (2-5 km). Therefore, measurements of water vapor aloft with high time resolution and sufficient spatial resolution have the potential to improve forecast skill for the initiation of convective storms; such measurements may be used for assimilation into and validation of numerical weather prediction (NWP) models. Currently, water vapor density profiles are obtained in situ by sensors on radiosondes and remotely by lidars, ground-based GPS networks, GPS radio occultation from satellites, and a relatively small number of space-borne microwave and infrared radiometers. In-situ radiosonde measurements have excellent vertical resolution but are severely limited in temporal and spatial coverage; in addition, each radiosonde takes 45-60 minutes to rise from ground level to the tropopause and is typically advected by upper-level winds up to tens of km horizontally from its launch site. Tomographic inversion applied to ground-based measurements of GPS wet delay is expected to yield data with 0.5-1 km vertical resolution at 30-minute intervals.
The COSMIC and CHAMP satellites in low earth orbit (LEO) provide measurements with 0.1-0.5 km vertical resolution at 30-minute intervals but only 200-600 km horizontal resolution, depending on the magnitude of the path-integrated refractivity. Microwave radiometers in low earth orbit provide reasonable vertical resolution (2 km) and mesoscale horizontal resolution (20 km), but with long repeat times. Both the prediction of convective initiation and quantitative precipitation estimation require knowledge of water vapor variations on sub-meso-γ scales (2-5 km) with update times on the order of a few tens of minutes. Due to the relatively high cost of both commercially available microwave radiometers for network deployment and rapid radiosonde launches with close horizontal spacing, such measurements have not been available. Measurements from a network of multi-frequency microwave radiometers can provide the information needed to retrieve the 3-D distribution of water vapor in the troposphere. An Observing System Simulation Experiment (OSSE) was performed in which synthetic retrievals from a network of radiometers were compared with results from the Weather Research and Forecasting (WRF) model at a grid scale of 500 m. These comparisons show that the 3-D water vapor field can be retrieved with an accuracy of 15-40%, depending on the number of sensors in the network and the location and time of the a priori information. To enable deployment of a network of low-cost radiometers, the Compact Microwave Radiometer for Humidity profiling (CMR-H) was developed by the Microwave Systems Laboratory at Colorado State University. Monolithic microwave integrated circuit technology and unique packaging yield a radiometer that is small (24 x 18 x 16 cm), lightweight (6 kg), relatively inexpensive, and low in power consumption (25-50 W, depending on weather conditions). Recently, field measurements at the DOE Atmospheric Radiation Measurement (ARM) Southern Great Plains site in Oklahoma have demonstrated the potential for coordinated, scanning microwave radiometers to provide 0.5-1 km resolution both vertically and horizontally with sampling times of 15 minutes or less. This work describes and demonstrates the use of algebraic reconstruction tomography to retrieve the 3-D water vapor field from simultaneous brightness temperatures using radiative transfer theory, optimal estimation, and Kalman filtering.
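Algebraic reconstruction tomography solves a linear system y = A x, where x holds gridded water vapor values, each row of A weights the grid cells intersected by one radiometer beam path, and y holds the path-integrated observations. A minimal Kaczmarz-style iteration (the classic ART update, run on a made-up toy geometry rather than the thesis's radiometer network) looks like this:

```python
import numpy as np

def art_solve(A, y, n_sweeps=200, relax=0.5):
    """Kaczmarz/ART: project the estimate onto each measurement hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            denom = a @ a
            if denom > 0:
                x += relax * (y[i] - a @ x) / denom * a
    return x

rng = np.random.default_rng(2)
x_true = rng.random(25)                          # toy 5x5 grid of water vapor
A = (rng.random((40, 25)) < 0.3).astype(float)   # 40 toy beam paths
y = A @ x_true                                   # noiseless path integrals

x_est = art_solve(A, y)
print("RMS error:", float(np.sqrt(np.mean((x_est - x_true) ** 2))))
```

In the real retrieval the rows come from scanning geometry and radiative transfer weighting functions, and the a priori and Kalman filtering mentioned above regularize the under-determined parts of the grid.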

Item Open Access
The study and real-time implementation of attenuation correction for X-band dual-polarization weather radars (Colorado State University. Libraries, 2008)
Liu, Yuxiang, author; Bringi, V. N., advisor; Chandrasekar, V., advisor
Attenuation of electromagnetic radiation due to rain or other wet hydrometeors along the propagation path has been studied extensively in the radar meteorology community. Recently, the use of short-range dual-polarization X-band radar systems has gained momentum due to their lower system cost compared with the much more expensive S-band systems. Advances in dual-polarization radar research have shown that the specific attenuation, and the differential attenuation between horizontally and vertically polarized waves, caused by oblate, highly oriented raindrops can be estimated using the specific differential phase. This advance enables correction of the measured reflectivity (Zh) and differential reflectivity (Zdr) for path attenuation. This thesis addresses, via theory, simulations, and data analyses, the accuracy and optimal estimation of attenuation-correction procedures at X-band frequency. A real-time implementation of the correction algorithm was developed for the first generation of the X-band dual-polarized Doppler radar network (Integration Project 1, IP1) operated by the NSF Center for Collaborative Adaptive Sensing of the Atmosphere (CASA). We evaluate the algorithm for correcting Zh and Zdr for rain attenuation using simulations and X-band radar data under both ideal and noisy conditions. The algorithm adjusts its parameters according to changes in temperature, drop shapes, and a certain class of drop size distributions (DSDs), with very fast convergence. The X-band radar data were obtained from the National Institute of Earth Science and Disaster Prevention (NIED), Japan, and from CASA IP1. The algorithm accurately corrects the NIED data for a typhoon event when compared with ground truth calculated from in situ disdrometer-based DSD measurements. We have implemented the algorithm in real time on all the CASA IP1 radar nodes. We also evaluate a preliminary method that separately estimates rain and wet-ice attenuation using microphysical outputs from a previous supercell simulation with the CSU-RAMS (Regional Atmospheric Modeling System); the retrieved rain and wet-ice specific attenuation fields were found to correspond closely to the 'true' fields calculated from the simulation. The concept of correcting rain and wet-ice attenuation separately can also be applied to the CASA IP1 network, with additional constraint information possibly provided by the WSR-88D network.
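The core of differential-phase-based attenuation correction is the near-linear relation between specific attenuation and specific differential phase, A_h ≈ α·K_dp, which integrates along the ray to a correction proportional to the accumulated differential phase Φ_dp. The sketch below applies that textbook correction to synthetic range profiles; the coefficient values and profile shapes are illustrative, not the thesis's self-adjusting parameters.

```python
import numpy as np

def correct_profiles(zh_meas_dbz, zdr_meas_db, kdp_deg_km, dr_km,
                     alpha=0.30, beta=0.05):
    """Differential-phase-based attenuation correction along one ray.

    Assumes linear relations A_h = alpha * K_dp and A_dp = beta * K_dp
    (dB/km); alpha and beta are illustrative X-band-like values (dB/deg).
    The two-way path-integrated attenuation equals alpha * Phi_dp, where
    Phi_dp is the two-way accumulated differential phase.
    """
    phidp = 2.0 * np.cumsum(kdp_deg_km) * dr_km    # accumulated phase (deg)
    zh_corr = zh_meas_dbz + alpha * phidp          # reflectivity correction (dB)
    zdr_corr = zdr_meas_db + beta * phidp          # differential reflectivity
    return zh_corr, zdr_corr

r = np.arange(0.0, 30.0, 0.5)                      # range gates (km)
kdp = np.where((r > 5) & (r < 15), 2.0, 0.1)       # synthetic rain cell, deg/km
zh_meas = 45.0 - 0.30 * 2.0 * np.cumsum(kdp) * 0.5   # attenuated measurement
zh_corr, _ = correct_profiles(zh_meas, np.zeros_like(r), kdp, 0.5)
print(round(float(zh_corr[-1]), 2))                # ~45.0 dBZ restored far out
```

The thesis's contribution lies in estimating α and β adaptively (they vary with temperature, drop shape, and DSD class) rather than fixing them as done here.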

Item Open Access
Application-aware in-network service and data fusion frameworks for distributed adaptive sensing systems (Colorado State University. Libraries, 2009)
Lee, Pan Ho, author; Jayasumana, Anura P., advisor
Distributed Collaborative Adaptive Sensing (DCAS) systems are emerging for applications such as the detection and prediction of hazardous weather using a network of radars. Collaborative Adaptive Sensing of the Atmosphere (CASA) is an example of these emerging DCAS systems: CASA is based on a dense network of weather radars that operate collaboratively to detect tornadoes and other hazardous atmospheric conditions. This dissertation presents an application-aware data transport framework to meet the data distribution and processing requirements of such mission-critical sensor applications over best-effort networks. The framework consists of an overlay architecture and a programming interface: the architecture enables the deployment of application-aware in-network services in an overlay network, allowing applications to best adapt to network conditions, while the programming interface facilitates the development of applications within the architectural framework. We demonstrate the efficacy of the proposed framework by considering a DCAS application, evaluating the proposed schemes in a network emulation environment and on PlanetLab, a world-wide Internet test-bed. The proposed schemes are very effective in delivering high-quality data to multiple end users under various network conditions. This dissertation also presents the design and implementation of an architectural framework for the timely and accurate processing of radar data fusion algorithms; a preliminary version of the framework is used for a real-time implementation of a multi-radar data fusion algorithm, the CASA network-based reflectivity retrieval algorithm. As part of this research, a peer-to-peer (P2P) collaboration framework for multi-sensor data fusion is presented, and simulation-based results illustrate its effectiveness. Because multi-sensor fusion applications have stringent real-time constraints, estimating network delay across the sensor networks is important, particularly as it affects the quality of sensor fusion applications. We develop an analytical model of multi-sensor data fusion latency for Internet-based sensor applications. Time-scale-invariant burstiness observed across the network produces excessive network latencies; the analytical model therefore accounts for the network delay due to self-similar cross-traffic and the latency of data synchronization for fusion. A comparison of the analytical model with simulation-based results shows that our model provides a good estimate of the multi-sensor data fusion latency.
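A simple way to see why data synchronization dominates fusion latency under bursty cross-traffic: the fusion step cannot proceed until the slowest sensor's data arrives, so fusion latency is the maximum over per-sensor delays, and heavy-tailed delay distributions inflate that maximum quickly as the network grows. The Monte Carlo sketch below illustrates this with Pareto-distributed delays as a rough stand-in for self-similar traffic effects; the thesis derives an analytical model, so treat this only as an intuition-builder with assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def fusion_latency(n_sensors, n_trials=100_000, tail_index=1.5, scale_ms=5.0):
    """Fusion waits for the slowest sensor: latency = max of per-sensor delays.

    Pareto-distributed delays mimic the heavy-tailed behavior induced by
    self-similar cross-traffic (a modeling assumption for illustration).
    """
    delays = scale_ms * (1.0 + rng.pareto(tail_index, size=(n_trials, n_sensors)))
    return delays.max(axis=1)

for n in (2, 4, 8, 16):
    lat = fusion_latency(n)
    print(f"{n:2d} sensors: mean {lat.mean():6.1f} ms, "
          f"99th percentile {np.percentile(lat, 99):7.1f} ms")
```

The rapid growth of the tail percentiles with sensor count is exactly the effect a fusion-latency model for Internet-based sensor networks has to capture.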