Theses and Dissertations
Browsing Theses and Dissertations by Title
Now showing 1 - 20 of 321
Item Embargo 3D localization of cytoskeleton in mouse spermatids using stochastic optical reconstruction microscopy (Colorado State University. Libraries, 2022) Sunny, Reshma, author; Krapf, Diego, advisor; Nikdast, Mahdi, committee member; Prasad, Ashok, committee member
It is estimated by the World Health Organization that 186 million individuals worldwide live with infertility. Studies have shown that the cause of male infertility is unknown in 30 to 50% of cases. Over the last several years, teratozoospermias have been investigated and traced back to events in spermatogenesis. The development of the acrosome and the manchette, protein and vesicle transport in spermatids, and sperm head shaping are crucial steps in the formation of healthy sperm. The cytoskeleton in spermatids plays a crucial role in shaping the sperm head. The acroplaxome exerts forces on the nucleus, gives the mammalian sperm head its species-specific shape, and facilitates the proper attachment of the nuclear cap, called the acrosome, which contains the enzymes required for sperm penetration of the oocyte. The manchette must form properly and remain intact, its diameter shortening as spermatids differentiate, so that it can constrict the base of the nucleus to shape the head and facilitate the transport of cargo to the base of the cell. Thus, as studies have confirmed, disruption in the organization of the cytoskeleton is a concern for infertility, and it is crucial to learn more about the cytoskeletal structures in spermatids. The goal of this thesis is to localize these structures in 3D; the major structures of interest are the acroplaxome and the manchette. For this, we use a super-resolution microscopy method called stochastic optical reconstruction microscopy (STORM) to image the spermatid cytoskeleton.
Our experiments confirmed the presence of α-tubulin in the manchette and of F-actin in both the manchette and the acroplaxome, as previously observed by researchers in 2D confocal images. We observed that the manchette reduces in diameter and progresses to the caudal portion of the cell at the later steps of differentiation, that the structure forms completely at step 10, and that it disassembles after step 14.

Item Open Access A biosensor system with an integrated CMOS microelectrode array for high spatio-temporal electrochemical imaging (Colorado State University. Libraries, 2019) Tedjo, William, author; Chen, Thomas, advisor; Tobet, Stuart, committee member; Collins, George, committee member; Wilson, Jesse, committee member
The ability to view biological events in real time has contributed significantly to research in the life sciences. While optical microscopy is important for observing anatomical and morphological changes, it is equally important to capture real-time two-dimensional (2D) chemical activities that drive bio-sample behavior. Existing chemical sensing methods (i.e., optical photoluminescence, magnetic resonance, and scanning electrochemical microscopy) are well established and optimized for ex vivo or in vitro analyses. However, such methods also present various limitations in resolution, real-time performance, and cost. The electrochemical method has benefited the life sciences by supporting studies and discoveries in neurotransmitter signaling and metabolic activity in biological samples. Meanwhile, integrating a microelectrode array (MEA) and complementary metal-oxide-semiconductor (CMOS) technology with the electrochemical method provides biosensing capabilities with high spatial and temporal resolution.
This work discusses three related subtopics in the following order: improvements to an electrochemical imaging system with 8,192 sensing points for neurotransmitter sensing; the comprehensive design process of an electrochemical imaging system with 16,064 sensing points based on the previous system; and the application of that system to imaging oxygen concentration gradients in metabolizing bovine oocytes. The first attempt at high-spatial-resolution electrochemical imaging was based on an integrated CMOS microchip with 8,192 configurable Pt surface electrodes, an on-chip potentiostat, on-chip control logic, and a microfluidic device designed to support ex vivo tissue experimentation. Using norepinephrine as a target analyte for proof of concept, the system is capable of differentiating norepinephrine concentrations from as low as 8 µM up to 1,024 µM with a linear response and a spatial resolution of 25.5×30.4 µm. Electrochemical imaging was performed using murine adrenal tissue as a biological model and successfully showed caffeine-stimulated release of catecholamines from live slices of adrenal tissue with the desired spatial and temporal resolutions, demonstrating an electrochemical imaging system capable of capturing changes in chemical gradients in live tissue slices. An enhanced system was then designed and implemented in a CMOS microchip based on the previous generation. The enhanced microchip has an expanded sensing area of 3.6×3.6 mm containing 16,064 Pt electrodes and the 16,064 associated integrated read channels. The novel three-electrode electrochemical sensor system, designed at a 27.5×27.5 µm pitch, enables spatially dense, cellular-level chemical gradient imaging. The noise level of the on-chip read channels allows amperometric linear detection of neurotransmitter (norepinephrine) concentrations from 4 µM to 512 µM with 4.7 pA/µM sensitivity (R=0.98).
The electrochemical response to dissolved oxygen concentration, or oxygen partial pressure (pO2), was also characterized with deoxygenated deionized water containing 10 µM to 165 µM dissolved O2, yielding 8.21 pA/µM sensitivity (R=0.89). The enhanced biosensor system also demonstrates selectivity to different target analytes, using cyclic voltammetry to simultaneously detect norepinephrine and uric acid. In addition, a custom-designed indium tin oxide and Au glass electrode is integrated into the microfluidic support system to enable pH measurement, ensuring viability of bio-samples in ex vivo experiments. Electrochemical images confirm the spatiotemporal performance at four frames per second while maintaining sensitivity to the target analytes. The overall system is controlled and continuously monitored by a custom-designed user interface optimized for real-time, high-spatiotemporal-resolution chemical bioimaging. It is well known that physiological events related to oxygen concentration gradients provide valuable information for determining the state of metabolizing biological cells. Utilizing the CMOS microchip with the 16,064-electrode Pt MEA and an improved three-electrode system configuration, the system is capable of imaging low oxygen concentrations with a limit of detection of 18.3 µM (0.58 mg/L, or 13.8 mmHg). A modified microfluidic support system allows convenient bio-sample handling and delivery to the MEA surface for sensing. In vitro oxygen imaging experiments were performed using bovine cumulus-oocyte complexes, with custom software algorithms to analyze their flux density and oxygen consumption rate. The imaging results are processed and presented as 2D heatmaps representing the dissolved oxygen concentration in the immediate proximity of the cell.
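The three equivalent detection-limit figures quoted above (18.3 µM, 0.58 mg/L, 13.8 mmHg) are related by the molar mass of O2 and a temperature-dependent solubility coefficient. A minimal conversion sketch, assuming a molar mass of 31.998 g/mol and a solubility of roughly 1.33 µM per mmHg (illustrative constants, not taken from the thesis; the exact solubility depends on temperature and salinity):

```python
# Convert a dissolved-oxygen concentration between the units used above.
# Assumed (hypothetical) constants:
O2_MOLAR_MASS_G_PER_MOL = 31.998
O2_SOLUBILITY_UM_PER_MMHG = 1.33  # temperature/salinity dependent

def umolar_to_mg_per_l(c_um: float) -> float:
    """µM -> mg/L: multiply by molar mass, convert µg to mg."""
    return c_um * O2_MOLAR_MASS_G_PER_MOL / 1000.0

def umolar_to_mmhg(c_um: float) -> float:
    """µM -> partial pressure via a Henry's-law-style solubility."""
    return c_um / O2_SOLUBILITY_UM_PER_MMHG

print(round(umolar_to_mg_per_l(18.3), 2))  # ~0.59 mg/L
print(round(umolar_to_mmhg(18.3), 1))      # ~13.8 mmHg
```

With these constants the three quoted figures agree to within rounding.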
The 2D images and analysis of oxygen consumption provide a unique insight into the spatial and temporal dynamics of cell metabolism.

Item Open Access A CMOS compatible optical biosensing system based on local evanescent field shift mechanism (Colorado State University. Libraries, 2011) Yan, Rongjin, author; Lear, Kevin L., advisor; Dandy, David S., committee member; Chandrasekar, V., committee member; Notaros, Branislav, committee member
The need for label-free integrated optical biosensors has increased dramatically in recent years. Integrated optical biosensors have many advantages, including low cost and portability, and can be applied to many fields, including clinical diagnostics, food safety, environmental monitoring, and biosecurity. One of the most important applications is point-of-care diagnosis, in which disease testing is performed at or near the site of patient care rather than in a laboratory. We explore the design, modeling, and measurement of a novel chip-scale local evanescent array coupled (LEAC) biosensor, an ideal platform for point-of-care diagnosis. To date, three generations of LEAC samples have been designed, fabricated, and tested. The 1st-generation LEAC sensor, without a buried detector array, was characterized using a commercial near-field scanning optical microscope (NSOM). The sample was polished, and light was end-fire coupled using a single-mode fiber. The field-shift mechanism in this proof-of-concept configuration without buried detector arrays was validated with inorganic adlayers [1], photoresist [2], and different concentrations of CRP proteins [3]. The mode-beating phenomenon was predicted by the beam propagation method (BPM) and observed in the NSOM measurements. A 2nd-generation LEAC sensor with a buried detector array was fabricated using a 0.35 µm CMOS process at Avago Technologies Inc., Fort Collins, Colorado.
Characterizations with single-layer patternings, including photoresist, BSA [4], and immunoassay complexes [5], were performed through cooperative efforts with various research groups. The BPM was used to study the LEAC sensor, and the simulation results demonstrated a sensitivity of 16%/nm, which matched the experimental data well [6]. Different antigens/antibodies, including mouse IgG and HspX (a tuberculosis-reactive antigen), have been used to test the immunoassay capability of the LEAC sensor [7]. A large amount of useful data was collected using the 2nd-generation LEAC chip. However, during the characterization of the Avago chips, some design problems were revealed, including incompatibility with microfluidic integration, a restricted detection region, strong sidewall scattering, and interference from light that remained uncoupled from the single-mode fiber. To address these problems, a 3rd-generation LEAC sensor chip with buried detector arrays was designed to allow real-time monitoring and compatibility with microfluidic channel integration. 3rd-generation samples have been fabricated in the CSU cleanroom, and the mesa detector structure has been replaced with a thin-insulator detector structure to solve the problems encountered during characterization. PDMS microfluidic channels and a multichannel measurement system, consisting of a probe card, a multiplexing/amplification circuit, and a LabVIEW program, have been integrated into the LEAC system. In recent years, outbreaks of fast-spreading viral diseases, such as bird flu and H1N1, have drawn considerable attention to point-of-care virus detection techniques. To test the virus detection capability of the LEAC sensor, 40 nm and 200 nm polystyrene nanoparticles were immobilized onto the waveguide, and the increased scattered light was collected.
Sensitivities of 1%/particle and 0.04%/particle were observed for 200 nm and 40 nm particles, respectively.

Item Open Access A data-driven approach for maximizing available wind energy through a dedicated pricing mechanism for charging residential plug-in electric vehicles (Colorado State University. Libraries, 2019) Eldali, Fathalla, author; Suryanarayanan, Siddharth, advisor; Collins, George J., committee member; Zimmerle, Dan, committee member; Abdel-Ghany, Salah, committee member
Wind energy generation is growing significantly because of favorable attributes such as cost-effectiveness and environmental friendliness. Electricity is the most perishable of commodities, as it must be consumed almost instantaneously as it is produced. Consequently, the variable nature of wind power generation and the challenges of forecasting wind output create problems of curtailment (when more wind energy is available than forecast) and reserve deployment (when less wind energy is available than forecast). Energy storage for wind power installations is a potential solution; however, storing large amounts of energy over long periods is expensive and inefficient. Plug-in electric vehicles (PEVs) are recognized as one way to integrate energy storage on the distribution side of the electricity grid, so PEV charging presents an alternative for managing excess energy in wind-energy-rich grids. Accurate wind power forecasting (WPF) in the day-ahead market leads to more predictable dispatch and unit commitment (UC) of generators, reducing the need for reserves and storage. Typically, the reserves that match imbalances in electricity supply and demand are provided by generators that are more expensive than those engaged in primary services. Markets in different regions of the world have specific designs, operation policies, and regulations for variable sources (e.g., wind and solar).
Independent system operators (ISOs), tasked with handling electricity markets in the US, must meet regulating reserve requirements as directed by the North American Electric Reliability Corporation (NERC). One of these requirements is that sufficient reserve must be available to cover any generation deficit, which can result from under-forecasting; conversely, ISOs sometimes need to curtail wind generation because of over-forecasting. In the first part of this dissertation, wind power data from the Electric Reliability Council of Texas (ERCOT) market is used to improve WPF, as Texas has the highest installed wind energy capacity in the North American electricity grid. An autoregressive integrated moving average (ARIMA) model is used for the WPF improvement. There is also a need for a coherent metric to quantify improvements to WPF, because different studies use different metrics, and a purely statistical representation of error reduction does not necessarily reflect the overall benefit, especially the economic benefit, to ISOs. In the second part of this dissertation, modifications of risk-adjusted metrics used in investment assessment are developed and applied to the operation cost (OC). The OC is the result of running economic dispatch (ED) on realistic synthetic models of the actual Texas grid to evaluate the impact of the WPF improvement on the cost of operation. The modified risk-adjusted metrics are also applied to deferring capital investment in distribution systems, and are then used to assess the combination of photovoltaics (PV) and a battery energy storage system (BESS) in the residential sector of the distribution grid, as explained in Appendix A. The third part of this dissertation uses a data-driven approach to investigate existing pricing mechanisms for a Texan city (Austin) located in a wind-energy-rich grid, ERCOT, under an increased adoption rate of PEVs.
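The ARIMA-based WPF improvement described above centers on fitting an autoregressive structure to the wind power time series. As a toy illustration of the AR core of such a model (the thesis applies full ARIMA to ERCOT data; the series and order here are invented), the coefficients can be fit by least squares and used for a one-step-ahead forecast:

```python
import numpy as np

# Toy AR(2) series with known coefficients (noiseless, for illustration only).
phi_true = (0.6, 0.3)
x = [1.0, 0.8]
for _ in range(20):
    x.append(phi_true[0] * x[-1] + phi_true[1] * x[-2])
x = np.array(x)

# Fit AR(2) coefficients by least squares: x[t] ≈ phi1*x[t-1] + phi2*x[t-2].
# (Full ARIMA adds differencing and moving-average terms on top of this AR core.)
A = np.column_stack([x[1:-1], x[:-2]])  # regressors x[t-1], x[t-2]
b = x[2:]
phi, *_ = np.linalg.lstsq(A, b, rcond=None)

# One-step-ahead forecast from the fitted model.
forecast = phi[0] * x[-1] + phi[1] * x[-2]
print(phi, forecast)
```

On this noiseless toy series the least-squares fit recovers the generating coefficients; on real wind data, model order selection and differencing are where most of the work lies.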
The study indicates the need for a dynamic pricing mechanism dedicated to PEVs, beyond the existing choices, to maximize the utility of available wind energy in the absence of grid-level energy storage. Dynamic pricing creates an opportunity to avoid high costs for the power provider and benefits consumers who respond to price changes; achieving these benefits, however, requires smart rate design and real data. After justifying the need for pricing mechanisms that fairly benefit both the utility and its customers in coordinating wind energy and PEV charging, this dissertation designs a time-varying pricing mechanism using a data decomposition technique. Real data from a city with high PEV adoption projections (Austin, Texas) located in a wind-rich electricity grid (ERCOT) is used to demonstrate the design.

Item Open Access A distributed network of autonomous environmental monitoring systems (Colorado State University. Libraries, 2018) Kinhal, Kiran Krishnamurthy, author; Azimi-Sadjadi, Mahmood R., advisor; Wilson, Jesse, committee member; Ghosh, Sudipto, committee member
Acoustic wireless sensor networks have found applications in various areas, including monitoring, assisted living, home automation, security, and situational awareness. The process of acoustic detection and classification usually demands significant human involvement in the form of visual and audio examination of the collected data, and the accuracy of the outcome is often limited by inevitable human error. To overcome this limitation and automate the process, we present a new, fully decentralized decision-making platform, referred to as the Environmental Monitoring Station (EMS), for sensor-level detection and classification of airborne acoustic sources in national parks.
The EMS automatically reports this information to a park station through two wireless communication systems. More specifically, this thesis focuses on the implementation of the communication systems on the EMS and on the design of the 1/3-octave filter bank used for onboard spectral sub-band feature generation. The 1/3-octave filter bank was implemented on an Artix-7 FPGA as a custom hardware unit and interfaced with the detection and classification algorithm on a MicroBlaze soft-core processor. The detection results are stored on an SD card, and source counts are tracked in the MicroBlaze firmware. The EMS board is equipped with two expansion slots for incorporating the XBee and GSM communication systems. The XBee modules build a self-forming mesh network of EMS nodes and make it easy to add or remove nodes from the network; the GSM module is used as a gateway to send data to the web server. The EMS is capable of performing detection, classification, and reporting of source events in near real time. A field test was recently conducted in the Lake Mead National Recreation Area, deploying a previously trained system as a slave node and a gateway as a master node to demonstrate and evaluate the detection, classification, and networking abilities of the developed system. The trained EMS was able to adequately detect and classify the sources of interest and successfully communicate the results through the gateway to the park station. At the time of writing, only two fully functional EMS boards had been built, so it was not possible to physically build a mesh network of several EMS systems; future research should focus on accomplishing this task. During the field test, it was not possible to achieve a high transmission range for the XBee, due to RF interference present in the deployment area.
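For context on the 1/3-octave analysis mentioned above: in a base-two 1/3-octave filter bank, band centers are spaced a factor of 2^(1/3) apart around a 1 kHz reference, with band edges a factor of 2^(1/6) on either side of each center. A small sketch of that bookkeeping (the band numbering relative to 1 kHz is a convention choice, not taken from the thesis):

```python
# Base-two 1/3-octave band centers and edges around a 1 kHz reference.
REF_HZ = 1000.0

def third_octave_center(n: int) -> float:
    """Center frequency of band n (n=0 -> 1 kHz, n=3 -> one octave up)."""
    return REF_HZ * 2.0 ** (n / 3.0)

def third_octave_edges(n: int):
    """Lower and upper band edges: center divided/multiplied by 2^(1/6)."""
    fc = third_octave_center(n)
    return fc / 2.0 ** (1.0 / 6.0), fc * 2.0 ** (1.0 / 6.0)

print(third_octave_center(0), third_octave_center(3))  # 1000.0 2000.0
```

Each band's edges span exactly one third of an octave, so three adjacent bands together cover one octave.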
An effort needs to be made to achieve a higher XBee transmission range by using a high-gain antenna and keeping the antenna in line of sight as much as possible. Due to inadequate training data, the EMS frequently misclassified sources and misdetected interference as sources, so the detection and classification algorithm should be trained on a larger, more representative data set with considerable variability to make it more robust and less sensitive to deployment location.

Item Open Access A hierarchical framework for energy-efficient resource management in green data centers (Colorado State University. Libraries, 2015) Jonardi, Eric, author; Pasricha, Sudeep, advisor; Siegel, H. J., advisor; Howe, Adele, committee member
Data centers and high performance computing systems are increasing in both size and number. The massive electricity consumption of these systems results in huge electricity costs, a trend that will become commercially unsustainable as systems grow even larger. Optimizations to improve energy efficiency and reduce electricity costs can be implemented at multiple system levels, and are explored in this thesis at the server node, data center, and geo-distributed data center levels, with a framework proposed for each level. As the core count in processors continues to rise, applications increasingly experience performance degradation due to co-location interference arising from contention for shared resources. The first part of this thesis proposes a methodology for modeling these co-location interference effects to enable accurate predictions of execution time for co-located applications, reducing or even eliminating the need to over-provision server resources to meet quality-of-service requirements, and improving overall system efficiency.
In the second part of this thesis, a thermal-, power-, and machine-heterogeneity-aware resource allocation framework is proposed for a single data center to reduce both total server power and the power required to cool the data center, while maximizing the reward of the executed workload in over-subscribed scenarios. The final part of this thesis explores the optimization of geo-distributed data centers, which are growing in number with the rise of cloud computing. A geographical load balancing framework with time-of-use pricing and integrated renewable power is designed, and it is demonstrated how increasing the detail of system knowledge and considering all system levels simultaneously can significantly improve electricity cost savings for geo-distributed systems.

Item Embargo A microphysiological system for studying barrier health of live tissues in real time (Colorado State University. Libraries, 2024) Way, Ryan, author; Chen, Thomas W., advisor; Wilson, Jesse, committee member; Chicco, Adam, committee member
Epithelial cells create barriers that protect many components of the body from their external environment. The gut in particular carries bacteria and other infectious agents. A healthy gut epithelial barrier prevents unwanted substances from accessing the underlying lamina propria while maintaining the ability to digest and absorb nutrients. Increased gut barrier permeability, better known as leaky gut, has been linked to several chronic inflammatory diseases, yet understanding its cause and developing effective interventions remain elusive due to the lack of tools that maintain a tissue's physiological environment while elucidating cellular functions under various stimuli ex vivo. This thesis presents a microphysiological system capable of recording the real-time barrier permeability of mouse gut tissues in a realistic physiological environment over extended durations.
Key components of the microphysiological system include a microfluidic chamber designed to hold the live tissue explant and create a sufficient microphysiological environment to maintain tissue viability; a media composition that preserves the microbiome and creates the necessary oxygen gradients across the barrier; integrated sensor electrodes and supporting electronics for acquiring and calculating transepithelial electrical resistance (TEER); and a scalable system architecture that allows multiple chambers to run in parallel for increased throughput. The experimental results demonstrate that the system can maintain tissue viability for up to 72 hours. The results also show that the custom-built, integrated TEER sensors are sufficiently sensitive to distinguish differing levels of barrier permeability in tissue treated with collagenase or low-pH media compared to controls. Permeability variations in tissue explants from different positions along the intestinal tract were also investigated using TEER, revealing disparities in their permeability. Finally, the results quantitatively determine the effect of the muscle layer on total epithelial resistance.

Item Open Access A multi-task learning method using gradient descent with applications (Colorado State University. Libraries, 2021) Larson, Nathan Dean, author; Azimi-Sadjadi, Mahmood R., advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee member
There is a critical need for classification methods that can robustly and accurately classify different objects in varying environments. Each environment in a classification problem can contain unique challenges that prevent traditional classifiers from performing well. To solve classification problems across different environments, multi-task learning (MTL) models have been applied that define each environment as a separate task. We discuss two existing MTL algorithms and explain why they are inefficient for high-dimensional data.
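The multi-task setup just described, with one task per environment, can be illustrated with a toy linear model in which every task shares a common weight vector plus a task-specific correction, both trained by gradient descent. This is an illustrative construction only, not the algorithm proposed in the thesis (which also addresses kernelization and high-dimensional data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tasks = 50, 3, 2

# Synthetic per-task data: y_t = X_t @ (w_shared + w_task_t), noiseless.
w_shared_true = np.array([1.0, -0.5, 0.25])
w_task_true = [np.array([0.2, 0.0, -0.1]), np.array([-0.3, 0.1, 0.0])]
X = [rng.normal(size=(n, d)) for _ in range(tasks)]
y = [X[t] @ (w_shared_true + w_task_true[t]) for t in range(tasks)]

# Gradient descent on the sum of per-task mean squared errors.
w_s = np.zeros(d)
w_t = [np.zeros(d) for _ in range(tasks)]
lr = 0.1
loss_history = []
for _ in range(2000):
    loss = 0.0
    grad_s = np.zeros(d)
    for t in range(tasks):
        r = X[t] @ (w_s + w_t[t]) - y[t]  # residual for task t
        loss += (r @ r) / n
        g = 2.0 * X[t].T @ r / n          # gradient w.r.t. (w_s + w_t)
        grad_s += g                       # shared weights see every task
        w_t[t] -= lr * g                  # task weights see only their task
    w_s -= lr * grad_s
    loss_history.append(loss)
print(loss_history[0], loss_history[-1])
```

The shared weights accumulate gradients from all tasks while each task-specific vector adapts to its own environment; real MTL formulations differ mainly in how they regularize or couple the per-task parameters.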
A gradient descent-based MTL algorithm is proposed that accommodates high-dimensional data while providing accurate classification results. Additionally, we introduce a kernelized MTL algorithm that may allow us to generate nonlinear classifiers. We compared the proposed MTL method with an existing method, the Efficient Lifelong Learning Algorithm (ELLA), by using both to train classifiers on the underwater unexploded ordnance (UXO) and Extended Modified National Institute of Standards and Technology (EMNIST) datasets. The UXO dataset contains acoustic color features of low-frequency sonar data; both real data collected from physical experiments and synthetic data were used, forming separate environments. The EMNIST digits dataset contains grayscale images of handwritten digits and was used to show how the proposed MTL algorithm performs with more tasks than the UXO dataset provides. Our classification experiments showed that the gradient descent-based algorithm improved performance over the traditional methods: the improvement was small on the UXO dataset and much larger on the EMNIST dataset when comparing our MTL algorithm to ELLA and single-task learning.

Item Open Access A new algorithm for retrieval of tropospheric wet path delay over inland water bodies and coastal zones using brightness temperature deflection ratios (Colorado State University. Libraries, 2013) Gilliam, Kyle L., author; Reising, Steven C., advisor; Notaros, Branislav, committee member; Kummerow, Christian, committee member
As part of former and current sea-surface altimetry missions, brightness temperatures measured by nadir-viewing 18-34 GHz microwave radiometers are used to determine the apparent path delay caused by variations in the index of refraction due to changes in tropospheric humidity. This tropospheric wet-path delay can be retrieved from these measurements with sufficient accuracy over the open ocean.
However, in coastal zones and over inland water, the highly variable radiometric emission from land surfaces at microwave frequencies has prevented accurate retrieval of wet-path delay using conventional algorithms. To extend wet-path delay corrections into the coastal zone (within 25 km of land) and to inland water bodies, a new method is proposed that uses higher-frequency radiometer channels, from approximately 50 to 170 GHz, to provide sufficiently small fields of view on the surface. The new approach is based on the variability of observations in several millimeter-wave radiometer channels on small spatial scales due to surface emissivity, in contrast to the larger-scale variability in atmospheric absorption: deflection ratios measured among several radiometric bands are used to estimate the transmissivity of the atmosphere due to water vapor. To this end, the brightness temperature deflection ratio (BTDR) method is developed, starting from a radiative transfer model for a downward-looking microwave radiometer, and is extended to pairs of frequency channels to retrieve the wet-path delay. A mapping from wet transmissivity to wet-path delay is then performed using atmospheric absorption models. A frequency selection study is presented to determine which frequency sets retrieve tropospheric wet-path delay accurately, with comparisons to frequency sets based on currently available microwave radiometers, and statistical noise analysis results are presented for a number of frequency sets. Additionally, this thesis demonstrates a method of identifying contrasting surface pixels in brightness temperature images, using edge detection algorithms, for retrieval with the BTDR method.
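The deflection-ratio idea above can be illustrated with a deliberately simplified, non-scattering radiative transfer model, TB = ε·Ts·t + Ta·(1 − t), in which surface reflection is ignored. Under that toy model, the brightness temperature deflection between two adjacent pixels of differing emissivity is Δε·Ts·t, so the ratio of deflections in two channels cancels the surface term and leaves the channel transmissivity ratio. All scene parameters below are invented for illustration; the BTDR formulation in the thesis is more complete than this:

```python
# Toy two-channel, two-pixel deflection-ratio computation.
# Assumed (illustrative) scene parameters, not values from the thesis:
Ts, Ta = 290.0, 275.0              # surface and effective atmosphere temps (K)
t = {"ch_a": 0.85, "ch_b": 0.60}   # per-channel atmospheric transmissivity
eps = (0.45, 0.95)                 # emissivity: water pixel vs land pixel

def brightness_temp(emissivity: float, trans: float) -> float:
    """Simplified non-scattering RT: surface term plus atmospheric term."""
    return emissivity * Ts * trans + Ta * (1.0 - trans)

deflection = {
    ch: brightness_temp(eps[1], t[ch]) - brightness_temp(eps[0], t[ch])
    for ch in t
}
# The surface contrast (eps[1]-eps[0])*Ts cancels in the ratio:
ratio = deflection["ch_a"] / deflection["ch_b"]
print(ratio, t["ch_a"] / t["ch_b"])
```

Because the ratio depends only on the channel transmissivities, a model relating transmissivity to water vapor can then be inverted for wet-path delay.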
Retrievals are then demonstrated from brightness temperatures measured by Special Sensor Microwave Imager/Sounder (SSMIS) instruments on three satellites for coastal and inland water scenes. For validation, these retrievals are qualitatively compared to independently derived total precipitable water products from SSMIS, the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), and the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E). A quantitative method for analyzing the data consistency of the retrieval is also presented as an estimate of the error in the retrieved wet-path delay. These comparisons show that the BTDR method is promising for retrieving wet-path delay over inland water and coastal regions, and several additional future uses for the algorithm are described.

Item Open Access A plastic total internal reflection-based photoluminescence device for enzymatic biosensors (Colorado State University. Libraries, 2013) Thakkar, Ishan G., author; Lear, Kevin L., advisor; Reardon, Kenneth, committee member; Collins, George, committee member
Growing concerns about the quality of water, food, and beverages in developing and developed countries drive sizeable markets for mass-producible, low-cost devices that can measure the concentration of contaminant chemicals rapidly and accurately. Several fiber-optic enzymatic biosensors have been reported for these applications, but their sensing signals contain a very strong scattered-excitation-light component, requiring expensive thin-film filters, and their non-planar structure makes them difficult to mass-produce. Other planar optical waveguide-based biosensors prove relatively costly and fragile because of their constituent materials and fabrication techniques.
Therefore, a plastic total internal reflection (TIR)-based low-cost, low-scatter, field-portable device for enzymatic biosensors is fabricated and demonstrated. The design concept of the TIR-based photoluminescent enzymatic biosensor is explained, and an analysis of economical materials with appropriate optical and chemical properties is presented. PMMA and PDMS are found to be appropriate due to their high chemical resistance, low cost, high optical transmittance, and low autofluorescence. The techniques and procedures used for device fabrication are discussed: the device incorporates a PMMA optical waveguide core and a PDMS fluid cell with simple multimode fiber optics, built using cost-effective fabrication techniques such as molding and surface modification. Several techniques for robustly depositing photoluminescent dyes on the PMMA core surface are discussed. A pH-sensitive fluorescent dye, fluoresceinamine, and an O2-sensitive phosphorescent dye, Ru(dpp)3, are both successfully deposited using Si-adhesive gel-based and HydroThane-based deposition methods, and two types of pH sensor using the two fluoresceinamine deposition techniques are demonstrated. The effect of fluoresceinamine concentration on fluorescence intensity and scattered excitation light intensity is also investigated: the ratio of fluorescence intensity to scattered excitation light intensity increases with concentration, although both absolute intensities decrease, by different amounts, as concentration increases. An enzymatic hydrogen peroxide (H2O2) sensor is demonstrated by depositing the ruthenium-based phosphorescent dye Ru(dpp)3 and the enzyme catalase on the surface of the waveguide core; the O2-sensitive phosphorescence of Ru(dpp)3 serves as the transduction signal, and catalase serves as the biological sensing component.
The H2O2 sensor exhibits a phosphorescence signal to scattered excitation light ratio of 100±18 without filtering. The unfiltered device demonstrates a detection limit of 2.2±0.6 µM with a linear range from 200 µM to 20 mM. An enzymatic lactose sensor is designed and characterized using Si-adhesive gel-based Ru(dpp)3 deposition and an oxidase enzyme. The lactose sensor exhibits a linear range of only up to 0.8 mM, which is too small for application in industrial process control. Therefore, a flow-cell-based sensor device with a fluid reservoir is proposed and fabricated to increase the linear range of the sensor. A multi-channel pH-sensor device with four channels is also designed and fabricated for simultaneous sensing of multiple analytes.Item Open Access A real time video pipeline for computer vision using embedded GPUs(Colorado State University. Libraries, 2016) Patil, Rutuja, author; Beveridge, Ross, advisor; Olschanowsky, Catherine, advisor; Azimi Sadjadi, Mahmood, committee member; Guzik, Stephen, committee memberThis thesis presents a case study confirming the feasibility of real-time computer vision applications on embedded GPUs. Applications that depend on video processing, such as security surveillance, can benefit from optimizations common in scientific computing. This thesis demonstrates the benefit of applying such optimizations to real-time computer vision applications on embedded GPUs. The primary contribution of this thesis is an optimized implementation of ViBe targeting NVIDIA's Jetson TK1. ViBe is a commonly used background subtraction algorithm. Optimizing a background subtraction algorithm accelerates the task of reducing the field of view to only the interesting patches of each video frame. Placing portable hardware close to the capture devices in a surveillance system reduces bandwidth requirements and cost. 
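As background for the optimizations discussed next, ViBe's per-pixel classification step can be sketched in a few lines. This is a hedged, CPU-only NumPy illustration rather than the thesis's CUDA implementation; the array shapes, function name, and parameter defaults (`radius`, `min_matches`) are illustrative assumptions:

```python
import numpy as np

def vibe_classify(frame, samples, radius=20, min_matches=2):
    """Label a pixel as background (True) when at least `min_matches`
    of its stored background samples lie within `radius` intensity
    units of the current pixel value."""
    # samples: (N, H, W) stack of past background samples per pixel
    dist = np.abs(samples.astype(np.int32) - frame.astype(np.int32))
    matches = (dist < radius).sum(axis=0)
    return matches >= min_matches

rng = np.random.default_rng(0)
H, W, N = 4, 4, 20
background = rng.integers(100, 110, size=(H, W), dtype=np.uint8)
samples = np.stack([background] * N)   # idealized static background model
frame = background.copy()
frame[0, 0] = 200                      # simulate a foreground object
mask = vibe_classify(frame, samples)   # True where background
```

Because each pixel's decision depends only on its own sample stack, the classification is embarrassingly parallel, which is what makes a GPU port attractive.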
The goals of the optimizations proposed for this algorithm are to (1) reduce memory traffic, (2) overlap CPU and GPU usage, and (3) reduce kernel overhead. The optimized implementation of ViBe achieves a frame rate of almost 55 FPS, exceeding the 30 FPS standard for real-time video. This consumes only a portion of the real-time window, leaving processing time for additional algorithms such as object recognition.Item Open Access A recursive least squares training approach for convolutional neural networks(Colorado State University. Libraries, 2022) Yang, Yifan, author; Azimi-Sadjadi, Mahmood, advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee memberThis thesis develops a fast method to train convolutional neural networks (CNNs) by applying the recursive least squares (RLS) algorithm in conjunction with back-propagation learning. In the training phase, the mean squared error (MSE) between the actual and desired outputs is iteratively minimized. The recursive updating equations for CNNs are derived via the back-propagation method and the normal equations. This method does not require the choice of a learning rate and hence does not suffer from the speed-accuracy trade-off. Additionally, it is much faster than conventional gradient-based methods in the sense that it needs fewer epochs to converge. The learning curves of the proposed method, together with those of standard gradient-based methods using the same CNN structure, are generated and compared on the MNIST handwritten digits and Fashion-MNIST clothes databases. The simulation results show that the proposed RLS-based training method requires only one epoch to meet the error goal during the training phase while offering comparable accuracy on the testing data sets.Item Open Access A semi-dynamic resource management framework for multicore embedded systems with energy harvesting(Colorado State University. 
Libraries, 2015) Xiang, Yi, author; Pasricha, Sudeep, advisor; Jayasumana, Anura, committee member; Siegel, H. J., committee member; Strout, Michelle Mills, committee memberSemiconductor technology has been evolving rapidly over the past several decades, introducing a new breed of embedded systems that are tiny, efficient, and pervasive. These embedded systems are the backbone of the ubiquitous and pervasive computing revolution, embedding intelligence all around us. Often, such embedded intelligence for pervasive computing must be deployed at remote locations for purposes of environmental sensing, data processing, information transmission, etc. Compared to current mobile devices, which are mostly supported by rechargeable and exchangeable batteries, emerging embedded systems for pervasive computing favor a self-sustainable energy supply, as their remote and mass deployment makes it impractical to change or charge their batteries. The ability to sustain systems by scavenging energy from ambient sources is called energy harvesting, which is gaining momentum for its potential to enable energy autonomy in the era of pervasive computing. Among various energy harvesting techniques, solar energy harvesting has attracted the most attention due to its high power density and availability. Another impact of semiconductor technology scaling into the deep submicron level is the shifting of design focus from performance to energy efficiency, as power dissipation on a chip cannot increase indefinitely. Due to unacceptable power consumption at high clock rates, it is desirable for computing systems to distribute workload over multiple cores with reduced execution frequencies so that overall system energy efficiency improves while meeting performance goals. Thus, it is necessary to adopt the design paradigm of multiprocessing for low-power embedded systems due to the ever-increasing demands for application performance and stringent limitations on power dissipation. 
In this dissertation we focus on the problem of resource management for multicore embedded systems powered by solar energy harvesting. We have conducted a substantial amount of research on this topic, which has led to a semi-dynamic resource management framework designed with emphasis on efficiency and flexibility that can be applied to energy harvesting-powered systems with a variety of functionality, performance, energy, and reliability goals. The capability and flexibility of the proposed semi-dynamic framework are verified by the issues we have addressed with it, including: (i) minimizing miss rate/miss penalty of systems with energy harvesting, (ii) run-time thermal control, (iii) coping with process variation induced core-to-core heterogeneity, (iv) management of hybrid energy storage, (v) scheduling of task graphs with inter-node dependencies, (vi) addressing soft errors during execution, (vii) mitigating aging effects across the chip over time, and (viii) supporting mixed-criticality scheduling on heterogeneous processors.Item Open Access A study of the influence of process parameter variations on the material properties and laser damage performance of ion beam sputtered Sc2O3 and HfO2 thin films(Colorado State University. Libraries, 2016) Langston, Peter F., author; Menoni, Carmen, advisor; Rocca, Jorge, committee member; Marconi, Mario, committee member; Yalin, Azer, committee memberThis work is a study of the influence of process parameter variations on the material properties and laser damage performance of ion beam sputtered Sc2O3 and HfO2 thin films grown using a Veeco Spector ion beam deposition system. These parameters were explored for the purpose of identifying optically sensitive defects in these high-index materials after the deposition process. 
Using a host of optical metrology and materials analysis techniques, we report on the relationship between the oxygen partial pressure in the deposition chamber during film growth and the optical absorption of the grown material at 1 μm. These materials were found to be prone to excess oxygen incorporation. Positive identification of this excess oxygen is made, and exactly how this oxygen is bound in the different materials is discussed. The influence of this defect type on the optical and mechanical properties of the material is also given and discussed. Laser damage results for these single layers are presented. The influence of higher and lower deposition energy was also studied to determine the potential for defect creation both at the surface and in the bulk of the grown material. Optimized thin films of HfO2, Sc2O3, and Ta2O5 were grown and tested for laser damage with a 1030 nm laser having a pulse width of ~375 ps and a nominal spot size of ~100 μm FWHM. The laser-induced damage threshold (LIDT) ranking of these materials followed the band gap of the material fairly well when tested in air. When these same materials were tested in vacuum, Sc2O3 was found to be very susceptible to vacuum-mediated laser-induced surface defect creation, resulting in greatly reduced LIDT performance. Ta2O5 showed much the same trend in that its in-vacuum performance was significantly reduced from its in-air performance, but the difference between the in-air and in-vacuum performance was not as great as for Sc2O3. HfO2 also showed a large reduction in its in-vacuum LIDT results compared with its in-air LIDT values; however, this material showed the smallest decrease of the three high-index materials tested. A second contribution of this work is the investigation of the impact of capping layers on the in-air and in-vacuum LIDT performance of single-layer films. 
Ultra-thin capping layers composed of different metal oxides were applied to 100 nm thick single layers of the same high-index materials already tested: HfO2, Sc2O3, and Ta2O5. These capped samples were then LIDT tested in air and in vacuum. The ultra-thin capping layers were shown to greatly influence the in-air and in-vacuum damage performance of the uncapped single layers. Damage probability curves were analyzed to retrieve surface and bulk defect densities as a function of local fluence. Methods for maximizing the LIDT performance of metal oxides based on the studied materials, for use in air and in vacuum, are discussed.Item Open Access Accelerated adaptive numerical methods for computational electromagnetics: enhancing goal-oriented approaches to error estimation, refinement, and uncertainty quantification(Colorado State University. Libraries, 2022) Harmon, Jake J., author; Notaroš, Branislav M., advisor; Estep, Don, committee member; Ilić, Milan, committee member; Oprea, Iuliana, committee memberThis dissertation develops strategies to enhance adaptive numerical methods for partial differential equation (PDE) and integral equation (IE) problems in computational electromagnetics (CEM). Through a goal-oriented emphasis, with a particular focus on scattered field and radar cross-section (RCS) quantities of interest (QoIs), we study automated acceleration techniques for the analysis of scattering targets. As a primary contribution of this work, we propose an error prediction refinement strategy which, in addition to providing rigorous global error estimates (as opposed to mere error indicators), promotes equilibration of local error contribution estimates, a key requirement of efficient discretizations. Furthermore, we pursue consistent exponential convergence of the QoIs with respect to the number of degrees of freedom without prior knowledge of the solution behavior (whether smooth or otherwise) or of the sensitivity of the QoIs to the discretization quality. 
These developments, in addition to supporting significant reductions in computation time for high accuracy, offer enhanced confidence in simulation results, thereby promoting higher quality decision making and design. Moreover, aside from the need for rigorous error estimation and fully automated discretization error control, practical simulations necessitate a study of uncertain effects arising, for example, from manufacturing tolerances. Therefore, maintaining the emphasis on the QoI, we leverage the computational effort expended in error estimation and adaptive refinement to relate perturbations in the model to perturbations of the QoI in the context of CEM applications. This combined approach permits simultaneous control of deterministic discretization error and its effect on the QoI, as well as a study of the QoI behavior in a statistical sense. A substantial implementation infrastructure undergirds the developments pursued in this dissertation. In particular, we develop an approach to conducting flexible refinements capable of both tuning local spatial resolution ($h$-refinements) and enriching function spaces ($p$-refinements) for vector finite elements. Based on a superposition of refinements (as opposed to traditional refinement-by-replacement), the presented $hp$-refinement paradigm drastically reduces implementation overhead, permits straightforward representation of meshes of arbitrary irregularity, and retains the potential for theoretically optimal rates of convergence even in the presence of singularities. These developments amplify the utility of high-quality error estimation and adaptive refinement mechanisms by facilitating the insertion of new degrees of freedom with surgical precision in CEM applications. We apply the proposed methodologies to a set of canonical targets and benchmarks in electromagnetic scattering and the Maxwell eigenvalue problem. 
While directed at time-harmonic excitations, the proposed methods readily apply to other problems and applications in applied mathematics.Item Open Access Accurate dimension reduction based polynomial chaos approach for uncertainty quantification of high speed networks(Colorado State University. Libraries, 2018) Krishna Prasad, Aditi, author; Roy, Sourajeey, advisor; Pezeshki, Ali, committee member; Notaros, Branislav, committee member; Anderson, Charles, committee memberWith the continued miniaturization of VLSI technology to sub-45 nm levels, uncertainties in nanoscale manufacturing processes and operating conditions have been found to translate into unpredictable system-level behavior of integrated circuits. As a result, there is a need for contemporary circuit simulation tools/solvers to model the forward propagation of device-level uncertainty to the network response. Recently, techniques based on the robust generalized polynomial chaos (PC) theory have been reported for the uncertainty quantification of high-speed circuit, electromagnetic, and electronic packaging problems. The major bottleneck in all PC approaches is that the computational effort required to generate the metamodel scales polynomially with the number of random input dimensions. To mitigate this poor scalability of conventional PC approaches, a reduced-dimensional PC approach is proposed in this dissertation. This PC approach uses a high dimensional model representation (HDMR) to quantify the relative impact of each dimension on the variance of the network response. The reduced-dimensional PC approach is further extended to problems with mixed aleatory and epistemic uncertainties. In this mixed PC approach, a parameterized formulation of analysis of variance (ANOVA) is used to identify the statistically significant dimensions and subsequently perform dimension reduction. 
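The variance-based ranking that HDMR/ANOVA-style dimension reduction relies on can be illustrated with a minimal polynomial chaos sketch. This is not the dissertation's algorithm: the degree-1 orthonormal Hermite basis, the toy response y = 3x1 + 0.1x2, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy response of two independent standard-normal inputs; x2 barely matters.
rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal((n, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1]

# Orthonormal Hermite basis up to degree 1: psi = [1, x1, x2].
Psi = np.column_stack([np.ones(n), X[:, 0], X[:, 1]])
coeff, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# For an orthonormal basis, Var[y] is the sum of squared non-constant
# coefficients; each term is one dimension's variance contribution.
var_contrib = coeff[1:] ** 2
sobol = var_contrib / var_contrib.sum()   # first-order Sobol-type indices
```

A dimension whose index falls below a chosen tolerance (here x2, contributing ~0.1% of the variance) can be dropped from the PC expansion, which is the essence of the reduced-dimensional approach.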
Mixed problems are, however, characterized by a far greater number of dimensions than purely epistemic or aleatory problems, thus exacerbating the poor scalability of PC expansions. To address this issue, a novel dimension fusion approach is proposed in this dissertation. This approach fuses the epistemic and aleatory dimensions within the same model parameter into a single mixed dimension. The accuracy and efficiency of the proposed approaches are validated through multiple numerical examples.Item Open Access Acoustic monitoring system for frog population estimation using in-situ progressive learning(Colorado State University. Libraries, 2013) Aboudan, Adam, author; Azimi-Sadjadi, Mahmood R., advisor; Fristrup, Kurt, committee member; Peterson, Christopher, committee memberFrog populations are considered excellent bio-indicators, and hence the ability to monitor changes in their populations can be very useful for ecological research and environmental monitoring. This thesis presents a new population estimation approach based on the recognition of individual frogs of the same species, namely Pseudacris regilla (the Pacific chorus frog), which does not rely on the availability of prior training data. An in-situ progressive learning algorithm is developed to determine whether an incoming call belongs to a previously detected individual frog or a newly encountered one. A temporal call overlap detector is also presented as a pre-processing tool to eliminate overlapping calls, preventing degradation of the learning process. The approach uses Mel-frequency cepstral coefficients (MFCCs) and multivariate Gaussian models to achieve individual frog recognition. In the first part of this thesis, the MFCC and the related linear predictive cepstral coefficient (LPCC) acoustic feature extraction processes are reviewed. Gaussian mixture models (GMMs) are also reviewed as an extension to the classical Gaussian modeling used in the proposed approach. 
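The accept-or-create decision at the heart of such progressive learning can be sketched with multivariate Gaussian scoring. This is an illustrative simplification, not the thesis implementation: 2-D features stand in for MFCC vectors, and the threshold value, the identity-covariance seeding of new models, and the function names are assumptions.

```python
import numpy as np

def log_gauss(x, mean, cov):
    """Log-density of x under a multivariate Gaussian N(mean, cov)."""
    d = x.size
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def assign_call(x, models, threshold=-10.0):
    """Return the index of the matching individual, or create and
    return a new model index when no score clears the threshold."""
    scores = [log_gauss(x, m, c) for m, c in models]
    if scores and max(scores) >= threshold:
        return int(np.argmax(scores))
    models.append((x.copy(), np.eye(x.size)))   # seed a new individual
    return len(models) - 1

models = [(np.zeros(2), np.eye(2))]             # one known individual
idx_known = assign_call(np.array([0.1, -0.2]), models)  # matches model 0
idx_new = assign_call(np.array([8.0, 8.0]), models)     # spawns model 1
```

The running count of models then serves as the population estimate, which is why eliminating overlapping calls before this step matters.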
In the second part of this thesis, the proposed frog population estimation system is presented and discussed in detail. The proposed system involves several components, including call segmentation, feature extraction, overlap detection, and the in-situ progressive learning process. In the third part of the thesis, data description and system performance results are provided. The process of synthetically generating test sequences of real frog calls, which are applied to the proposed system for performance analysis, is described. The results show that the system is successful in distinguishing individual frogs and is hence capable of providing reasonable estimates of the frog population. The system can readily be transitioned to actual field studies.Item Open Access Acoustic tomography of the atmosphere using iterated unscented Kalman filter(Colorado State University. Libraries, 2012) Kolouri, Soheil, author; Azimi-Sadjadi, Mahmood R., advisor; Chong, Edwin K. P., committee member; Cooley, Daniel S., committee memberTomography approaches are of great interest because of their non-intrusive nature and their ability to generate a significantly larger amount of data in comparison to in-situ measurement methods. Acoustic tomography is an approach that reconstructs the unknown parameters affecting the propagation of acoustic rays in a field of interest by studying the temporal characteristics of the propagation. Acoustic tomography has been used in several different disciplines, such as biomedical imaging, oceanographic studies, and atmospheric studies. The focus of this thesis is acoustic tomography of the atmosphere, in order to reconstruct the temperature and wind velocity fields in the atmospheric surface layer using travel times collected from several pairs of transmitter and receiver sensors distributed in the field. Our work consists of three main parts. 
The first part of this thesis is dedicated to reviewing the existing methods for acoustic tomography of the atmosphere, namely statistical inversion (SI), time dependent statistical inversion (TDSI), the simultaneous iterative reconstruction technique (SIRT), and the sparse recovery framework. The properties of these methods are explained extensively and their shortcomings are noted. In the second part of this thesis, a new acoustic tomography method based on the unscented Kalman filter (UKF) is introduced to address some of the shortcomings of the existing methods. Using the UKF, the problem is cast as a state estimation problem in which the temperature and wind velocity fields are the desired states to be reconstructed. The field is discretized into several grid cells, within each of which the temperature and wind velocity fields are assumed to be constant. Different models, namely a random walk, a first-order 3-D autoregressive (AR) model, and a 1-D temporal AR model, are used to capture the state evolution in time and space. Given the time of arrival (TOA) equation for acoustic propagation as the observation equation, the temperature and wind velocity fields are then reconstructed using a fixed-point iterative UKF. The focus of the third part of this thesis is on generating meaningful synthetic data for the temperature and wind velocity fields to test the proposed algorithms. A 2-D fractional Brownian motion (fBm)-based method is used to generate realizations of the temperature and wind velocity fields. The synthetic data are generated for 500 subsequent snapshots of wind velocity and temperature field realizations with a spatial resolution of one meter and a temporal resolution of 12 seconds. Given the locations of the acoustic sensors, the TOAs are calculated for all the acoustic paths. In addition, white Gaussian noise is added to the calculated TOAs to simulate measurement error. 
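The TOA observation equation underlying both the synthetic-data generation and the UKF update can be sketched as a straight-ray forward model over grid cells. This is a hedged illustration: the adiabatic sound-speed approximation c ≈ 20.05√T is standard, but the grid, segment lengths, and function signature below are invented for the example.

```python
import numpy as np

def toa(segments, temps_K, winds, ray_dir):
    """Travel time of a straight ray: sum of per-cell path length over
    effective sound speed (adiabatic speed plus along-ray wind).
    segments: per-cell path lengths (m); temps_K: cell temperatures (K);
    winds: (n, 2) cell wind vectors (m/s); ray_dir: unit direction."""
    c = 20.05 * np.sqrt(temps_K)        # adiabatic sound speed, m/s
    c_eff = c + winds @ ray_dir         # wind component projected on ray
    return float(np.sum(segments / c_eff))

ray = np.array([1.0, 0.0])
seg = np.array([50.0, 50.0])            # 100 m path through two cells
temp = np.array([293.0, 293.0])
t_calm = toa(seg, temp, np.zeros((2, 2)), ray)          # no wind
t_tail = toa(seg, temp, np.array([[5.0, 0.0],
                                  [5.0, 0.0]]), ray)    # 5 m/s tailwind
```

Inverting this relationship, from many ray/TOA pairs back to the per-cell temperatures and winds, is exactly the estimation problem the UKF solves.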
The synthetic data is then used to test the proposed method, and the results are compared to those of the TDSI method. This comparison attests to the superiority of the proposed method in terms of reconstruction accuracy, real-time processing, and the ability to track temporal changes in the data.Item Embargo Advanced processing of dual polarization weather radar signal(Colorado State University. Libraries, 2022) Haran, Shweta, author; Chandrasekar, V., advisor; Chen, Haonan, committee member; Siller, Thomas, committee memberThis research focuses on processing radar data in the spectral domain and analyzing the microphysical properties of hail and rain in severe convective and stratiform storms. This research also discusses the optimization of a parametric time domain method to separate cloud and drizzle data. The microphysical and kinematic properties of the hydrometeors present in a precipitation event can be studied using spectral-domain processing and analysis of the radar moments. This analysis, combined with polarimetric information, is called spectral polarimetry. For this study, the observations made by the CSU-CHIVO (Colorado State University - C-band Hydrometeorological Instrument for Volumetric Observation) radar during the RELAMPAGO (Remote sensing of Electrification, Lightning, And Mesoscale/Microscale Processes with Adaptive Ground Observations) campaign are utilized. Features such as the slope in differential reflectivity, the spectrum width, and the spectral copolar correlation are studied, which give a better understanding of the storm microphysics. In this thesis, the microphysical properties of different types of hydrometeors, such as hail, rain, and large drops, are studied using convective and stratiform storm observations. A parametric time-domain method (PTDM) is utilized for the separation of cloud and drizzle data. To reduce the latency of processing the data, the processing code is optimized by deploying it on a high-performance computer (HPC). 
The processing code is tested on an HPC and automated to handle errors in processing. The run time is reduced by approximately 50%, increasing data processing efficiency. This study shows that run-time optimization using an HPC is effective, and similar time-consuming algorithms can be deployed the same way to increase efficiency and performance.Item Embargo Advanced solutions for rainfall estimation over complex terrain in the San Francisco Bay area(Colorado State University. Libraries, 2023) Biswas, Sounak Kumar, author; Chandrasekar, V., advisor; Cheney, Margaret, committee member; Gooch, Steven, committee member; James, Susan, committee memberFresh water is an increasingly scarce resource in the western United States (US), and effective management and prediction of flooding and drought have a direct economic impact on almost all aspects of society. Therefore, it is critical to monitor and predict water inputs into the hydrological cycle of the western US. The complex topography of the western US poses a significant challenge in developing physically realistic and spatially accurate estimates of precipitation using remote sensing techniques. The intricate landscape presents a challenging observing environment for weather radar systems. This is further compounded by the complex microphysical processes during the cool season, which are influenced by coastal air-sea interactions as well as orographic effects along the coastal regions of the West. The placement and density of operational National Weather Service (NWS) radars (popularly known as NEXRAD or WSR-88D) pose a challenge in meeting the needs of water resource management in the western US due to the complex terrain of the region. Consequently, areas like the San Francisco Bay Area could benefit from enhanced precipitation monitoring, in terms of amount and type, along watersheds and surrounding rivers and streams. 
Shorter-wavelength radars, such as X-band radar systems, are able to augment the WSR-88D network to better observe the lower atmosphere with higher temporal and spatial resolution. This research investigates and documents the challenges of precipitation monitoring by radars over complex terrain and aims to provide effective and advanced solutions for accurate Quantitative Precipitation Estimation (QPE) using both WSR-88D and gap-filling X-band radar systems over the Bay Area on the US West Coast, with a focus on the cool season. Specifically, this study takes a precipitation microphysics perspective, aiming to create an algorithm capable of distinguishing orographically enhanced rainfall from cool-season stratiform rainfall using X-band radar observations. A radar-based rainfall estimator is developed to increase the accuracy of rainfall quantification. Additionally, various other scientific and engineering challenges are addressed, including radar calibration, attenuation correction of the radar beam, radar beam blockage due to terrain, and correction of measurements of the vertical profiles of radar observables. The final QPE product is constructed by merging the X-band-based QPE product with the operational NEXRAD-based QPE product, significantly enhancing the overall quality of rainfall mapping within the Bay Area. Case studies reveal that the new product is able to improve QPE accuracy by ~70% in terms of mean absolute error and root mean squared error compared to the operational products. This establishes the overall need for precipitation monitoring by gap-filling X-band radar systems in the complex terrain of the San Francisco Bay Area.
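The accuracy figures quoted above are in terms of mean absolute error (MAE) and root mean squared error (RMSE) against reference observations such as rain gauge accumulations. A minimal sketch of how these two metrics are computed, using made-up accumulation values rather than data from the dissertation:

```python
import numpy as np

def mae(est, ref):
    """Mean absolute error between estimated and reference values."""
    return float(np.mean(np.abs(est - ref)))

def rmse(est, ref):
    """Root mean squared error between estimated and reference values."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

gauge = np.array([10.0, 5.0, 20.0, 0.0])   # mm, hypothetical gauge totals
qpe = np.array([12.0, 4.0, 17.0, 1.0])     # mm, hypothetical radar QPE

err_mae = mae(qpe, gauge)
err_rmse = rmse(qpe, gauge)
```

RMSE penalizes large misses more heavily than MAE, so reporting both gives a fuller picture of how a merged QPE product compares with the operational one.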