Browsing by Author "Pezeshki, Ali, committee member"
Now showing 1 - 20 of 52
Item Open Access
A multi-task learning method using gradient descent with applications (Colorado State University. Libraries, 2021)
Larson, Nathan Dean, author; Azimi-Sadjadi, Mahmood R., advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee member
There is a critical need to develop classification methods that can robustly and accurately classify different objects in varying environments. Each environment in a classification problem can contain its own unique challenges which prevent traditional classifiers from performing well. To solve classification problems in different environments, multi-task learning (MTL) models have been applied that define each environment as a separate task. We discuss two existing MTL algorithms and explain how they are inefficient for situations involving high-dimensional data. A gradient descent-based MTL algorithm is proposed which allows for high-dimensional data while providing accurate classification results. Additionally, we introduce a kernelized MTL algorithm which may allow us to generate nonlinear classifiers. We compared our proposed MTL method with an existing method, the Efficient Lifelong Learning Algorithm (ELLA), by using them to train classifiers on the underwater unexploded ordnance (UXO) and extended modified National Institute of Standards and Technology (EMNIST) datasets. The UXO dataset contained acoustic color features of low-frequency sonar data. Both real data collected from physical experiments and synthetic data were used, forming separate environments. The EMNIST digits dataset contains grayscale images of handwritten digits. We used this dataset to show how our proposed MTL algorithm performs when used with more tasks than are in the UXO dataset. Our classification experiments showed that our gradient descent-based algorithm improved performance over the traditional methods. The UXO dataset showed a small improvement, while the EMNIST dataset showed a much larger improvement when using our MTL algorithm compared to ELLA and the single-task learning method.

Item Open Access
A recursive least squares training approach for convolutional neural networks (Colorado State University. Libraries, 2022)
Yang, Yifan, author; Azimi-Sadjadi, Mahmood, advisor; Pezeshki, Ali, committee member; Oprea, Iuliana, committee member
This thesis aims to develop a fast method to train convolutional neural networks (CNNs) by applying the recursive least squares (RLS) algorithm in conjunction with back-propagation learning. In the training phase, the mean squared error (MSE) between the actual and desired outputs is iteratively minimized. The recursive updating equations for CNNs are derived via the back-propagation method and the normal equations. This method does not need the choice of a learning rate and hence does not suffer from the speed-accuracy trade-off. Additionally, it is much faster than the conventional gradient-based methods in the sense that it needs fewer epochs to converge. The learning curves of the proposed method together with those of the standard gradient-based methods using the same CNN structure are generated and compared on the MNIST handwritten digits and Fashion-MNIST clothes databases. The simulation results show that the proposed RLS-based training method requires only one epoch to meet the error goal during the training phase while offering comparable accuracy on the testing data sets.
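As a rough illustration of the recursive least squares idea described in the abstract above, the sketch below applies an RLS update to the weights of a network's final linear layer. This is not the thesis implementation; the feature vectors (assumed to be the flattened activations feeding the output layer), the one-hot-style targets, and the initialization constant are assumptions for the toy example.

```python
# Minimal sketch, assuming a linear output layer trained by RLS (no learning rate needed).
import numpy as np

def rls_init(n_features, n_outputs, delta=100.0):
    """Return initial weights W and inverse correlation matrix P."""
    W = np.zeros((n_outputs, n_features))
    P = delta * np.eye(n_features)           # large P acts as a weak initial prior
    return W, P

def rls_update(W, P, x, d, lam=1.0):
    """One RLS step minimizing the (exponentially weighted) MSE."""
    x = x.reshape(-1, 1)                      # column vector
    Px = P @ x
    k = Px / (lam + x.T @ Px)                 # gain vector
    e = d - W @ x.flatten()                   # a-priori output error
    W = W + np.outer(e, k.flatten())          # weight update
    P = (P - k @ Px.T) / lam                  # update inverse correlation matrix
    return W, P

# Toy usage: recover a random linear mapping from streaming samples.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 64))
W, P = rls_init(64, 10)
for _ in range(200):
    x = rng.standard_normal(64)
    W, P = rls_update(W, P, x, A @ x)
print("relative weight error:", np.linalg.norm(W - A) / np.linalg.norm(A))
```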
Item Open Access
Accurate dimension reduction based polynomial chaos approach for uncertainty quantification of high speed networks (Colorado State University. Libraries, 2018)
Krishna Prasad, Aditi, author; Roy, Sourajeey, advisor; Pezeshki, Ali, committee member; Notaros, Branislav, committee member; Anderson, Charles, committee member
With the continued miniaturization of VLSI technology to sub-45 nm levels, uncertainty in nanoscale manufacturing processes and operating conditions has been found to translate into unpredictable system-level behavior of integrated circuits. As a result, there is a need for contemporary circuit simulation tools/solvers to model the forward propagation of device-level uncertainty to the network response. Recently, techniques based on the robust generalized polynomial chaos (PC) theory have been reported for the uncertainty quantification of high-speed circuit, electromagnetic, and electronic packaging problems. The major bottleneck in all PC approaches is that the computational effort required to generate the metamodel scales in a polynomial fashion with the number of random input dimensions. In order to mitigate this poor scalability of conventional PC approaches, in this dissertation, a reduced dimensional PC approach is proposed. This PC approach is based on using a high dimensional model representation (HDMR) to quantify the relative impact of each dimension on the variance of the network response. The reduced dimensional PC approach is further extended to problems with mixed aleatory and epistemic uncertainties. In this mixed PC approach, a parameterized formulation of analysis of variance (ANOVA) is used to identify the statistically significant dimensions and subsequently perform dimension reduction. Mixed problems are, however, characterized by a far greater number of dimensions than purely epistemic or aleatory problems, thus exacerbating the poor scalability of PC expansions. To address this issue, in this dissertation, a novel dimension fusion approach is proposed. This approach fuses the epistemic and aleatory dimensions within the same model parameter into a mixed dimension. The accuracy and efficiency of the proposed approaches are validated through multiple numerical examples.

Item Open Access
Air pollutant source estimation from sensor networks (Colorado State University. Libraries, 2024)
Thakur, Tanmay, author; Lear, Kevin, advisor; Pezeshki, Ali, committee member; Carter, Ellison, committee member
A computationally efficient model for the estimation of unknown source parameters using the Gaussian plume model, linear least squares optimization, and gradient descent is presented in this work. This thesis discusses results for simulations of a two-dimensional field using advection-diffusion equations, underlining the benefits of plume solutions when compared to other methods. The Gaussian plume spread for pollutant concentrations has been studied in this work and modeled in Matlab to estimate the pollutant concentration at various wireless sensor locations. To set up the model simulations, we created a field in Matlab with several pollutant-measuring sensors and one or two pollutant-emitting sources. The forward model estimated the concentration measured at the sensors when the sources emit the pollutants. These pollutants were programmed in Matlab to follow Gaussian plume equations while spreading. The initial work estimated the concentration of the pollutants with varying sensor noise, wind speed, and wind angles. The varying noise affects the sensors' readings, whereas the wind speed and wind angle affect the plume shape. The forward results are then applied to solving the inverse problem to determine the possible sources and pollutant emission rates in the presence of additive white Gaussian noise (AWGN). A vector of possible sources within a region of interest is minimized using L2 minimization and gradient descent methods. Initially, the input to the inverse model is a random guess for the source location coordinates. Then, initial values for the source emission rates are calculated using the linear least squares method, since the sensor readings are proportional to the source emission rates. The accuracy of this model is calculated by comparing the predicted source locations with the true locations of the sources. The cost function reaches a minimum value when the predicted sensor concentrations are close to the true concentration values. The model continues to minimize the cost function until it remains fairly constant. The inverse model is initially developed for a single source and later developed for two sources. Different configurations for the number of sources and locations of the sensors are considered in the inverse model to evaluate the accuracy. After verifying the inverse algorithm with synthetic data, we then used the algorithm to estimate the source of pollution with real air pollution sensor data collected by Purple Air sensors. For this problem, we extracted data from Purpleair.com from four sensors around the Woolsey forest fire area in California in 2018 and used the data as input to the inverse model. The predictions suggested the source was located close to the true high-intensity forest fire in that area. Later, we apply a neural network method to estimate the source parameters and compare the estimates of the neural network with the results from the inverse problem using the physical model for the synthetic data. The neural network model uses sequential neural network techniques for training, testing, and predicting the source parameters. The model was trained with sensor concentration readings, source locations, wind speeds, wind angles, and corresponding source emission rates. The model was tested using the testing data set to compare the predictions with the true source locations and emission rates. The training and testing data were subjected to feature engineering practices to improve the model's accuracy. To improve the accuracy of the model, different configurations of activation functions, batch size, and epoch size were used. The neural network model was able to obtain an accuracy above 90% in predicting the source emission rates and source locations. This accuracy varied depending upon the type of configuration used, such as single source, multiple sources, number of sensors, noise levels, wind speed, and wind angle. In the presence of sensor noise, the neural network model was more accurate than the physical inverse model in predicting the source location, based on a comparison of R2 scores for fitting the predicted source location to the true source location. Further work on this model's accuracy will help the development of a real-time air quality wireless sensor network application with automatic pollutant source detection.
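To illustrate why the emission rates can be recovered by linear least squares, the sketch below implements a simplified ground-level Gaussian plume forward model and inverts it for the rates at a set of candidate source locations. It is not the thesis model: the dispersion coefficients, wind handling, sensor layout, and candidate grid are assumptions for the toy example.

```python
# Minimal sketch, assuming a ground-level source/receptor and simple linear dispersion widths.
import numpy as np

def plume_concentration(sensor_xy, source_xy, q, u=3.0, wind_angle=0.0):
    """Concentration at a sensor from one ground-level source with emission rate q."""
    d = np.asarray(sensor_xy) - np.asarray(source_xy)
    c, s = np.cos(wind_angle), np.sin(wind_angle)
    x =  c * d[0] + s * d[1]                  # downwind distance
    y = -s * d[0] + c * d[1]                  # crosswind distance
    if x <= 0:
        return 0.0                            # sensor is upwind of the source
    sig_y, sig_z = 0.08 * x, 0.06 * x         # assumed dispersion coefficients
    return q / (np.pi * u * sig_y * sig_z) * np.exp(-y**2 / (2 * sig_y**2))

def estimate_emission_rates(sensors, candidate_sources, measured, u=3.0, wind_angle=0.0):
    """Sensor readings are linear in the emission rates, so solve A q ~ measured."""
    A = np.array([[plume_concentration(s, src, 1.0, u, wind_angle)
                   for src in candidate_sources] for s in sensors])
    q_hat, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return q_hat

# Toy usage: one true source, two candidate locations, additive Gaussian sensor noise.
sensors = [(200.0, 10.0), (400.0, -30.0), (600.0, 50.0), (800.0, 0.0)]
true_src, q_true = (0.0, 0.0), 5.0
y = np.array([plume_concentration(s, true_src, q_true) for s in sensors])
y += 0.01 * y.max() * np.random.default_rng(1).standard_normal(len(y))
print(estimate_emission_rates(sensors, [(0.0, 0.0), (100.0, 60.0)], y))
```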
Item Open Access
Anchor centric virtual coordinate systems in wireless sensor networks: from self-organization to network awareness (Colorado State University. Libraries, 2012)
Dhanapala, Dulanjalie C., author; Jayasumana, Anura P., advisor; Kirby, Michael, committee member; Pezeshki, Ali, committee member; Ray, Indrakshi, committee member
Future Wireless Sensor Networks (WSNs) will be collections of thousands to millions of sensor nodes, automated to self-organize, adapt, and collaborate to facilitate distributed monitoring and actuation. They may even be deployed over harsh geographical terrains and 3D structures. Low-cost sensor nodes that facilitate such massive scale networks have stringent resource constraints (e.g., in memory and energy) and limited capabilities (e.g., in communication range and computational power). Economic constraints exclude the use of expensive hardware such as Global Positioning Systems (GPSs) for network organization and structuring in many WSN applications. Alternatives that depend on signal strength measurements are highly sensitive to noise and fading, and thus often are not pragmatic for network organization. Robust, scalable, and efficient algorithms for network organization and reliable information exchange that overcome the above limitations without degrading the network's lifespan are vital for facilitating future large-scale WSN networks. This research develops fundamental algorithms and techniques targeting self-organization, data dissemination, and discovery of physical properties such as boundaries of large-scale WSNs without the need for costly physical position information. Our approach is based on Anchor Centric Virtual Coordinate Systems, commonly called Virtual Coordinate Systems (VCSs), in which each node is characterized by a coordinate vector of shortest path hop distances to a set of anchor nodes. We develop and evaluate algorithms and techniques for the following tasks associated with the use of VCSs in WSNs: (a) novelty analysis of each anchor coordinate and compressed representation of VCSs; (b) regaining lost directionality and identifying a 'good' set of anchors; (c) generating topology preserving maps (TPMs); (d) efficient and reliable data dissemination, and boundary identification without physical information; and (e) achieving network awareness at individual nodes. After investigating properties and issues related to VCS, a Directional VCS (DVCS) is proposed based on a novel transformation that restores the lost directionality information in VCS. Extreme Node Search (ENS), a novel and efficient anchor placement scheme, starts with two randomly placed anchors and then uses this directional transformation to identify the number and placement of anchors in a completely distributed manner. Furthermore, a novelty-filtering-based approach for identifying a set of 'good' anchors that reduces the overhead and power consumption in routing is discussed. Physical layout information such as physical voids and even relative physical positions of sensor nodes with respect to X-Y directions are absent in a VCS description. Obtaining such information independent of physical information or signal strength measurements has not been possible until now. Two novel techniques to extract Topology Preserving Maps (TPMs) from VCS, based on Singular Value Decomposition (SVD) and DVCS, are presented. A TPM is a distorted version of the layout of the network, but one that preserves the neighborhood information of the network. The generalized SVD-based TPM scheme for 3D networks provides TPMs even in situations where obtaining accurate physical information is not possible. The ability to restore directionality and topology-based Cartesian coordinates makes VCS competitive and, in many cases, a better alternative to geographic coordinates. This is demonstrated using two novel routing schemes in the VC domain that outperform the well-known physical information-based routing schemes. The first scheme, DVC Routing (DVCR), uses the directionality recovered by DVCS. Geo-Logical Routing (GLR) is a technique that combines the advantages of geographic and logical routing to achieve higher routability at a lower cost by alternating between topology and virtual coordinate spaces to overcome local minima in the two domains. GLR uses topology domain coordinates derived solely from VCS as a better alternative to physical location information. A boundary detection scheme that is capable of identifying physical boundaries even for 3D surfaces is also proposed. "Network awareness" is a node's cognition of its neighborhood, its position in the network, and the network-wide status of the sensed phenomena. A novel technique is presented whereby a node achieves network awareness by passively listening to routine messages associated with applications in large-scale WSNs. With knowledge of the network topology and phenomena distribution, every node is capable of making solo decisions that are more sensible and intelligent, thereby improving overall network performance, efficiency, and lifespan. In essence, this research has laid a firm foundation for the use of Anchor Centric Virtual Coordinate Systems in WSN applications, without the need for physical coordinates. Topology coordinates, derived from virtual coordinates, provide a novel, economical, and in many cases better alternative to physical coordinates. A novel concept of network awareness at nodes is demonstrated.
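The sketch below illustrates the general flavor of anchor-based virtual coordinates and an SVD-derived 2-D topology map, in the spirit of the SVD-based TPM scheme mentioned above. It is not the dissertation's algorithm; the grid graph, anchor choice, and the use of the top two singular modes are assumptions for the toy example.

```python
# Minimal sketch, assuming a connected graph given as an adjacency list.
import numpy as np
from collections import deque

def hop_distances(adj, anchor):
    """BFS hop distance from one anchor to every node."""
    dist = {anchor: 0}
    q = deque([anchor])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def topology_coordinates(adj, anchors):
    """Virtual coordinate matrix (nodes x anchors) projected onto its top two SVD modes."""
    nodes = sorted(adj)
    P = np.array([[hop_distances(adj, a)[n] for a in anchors] for n in nodes], float)
    P -= P.mean(axis=0)                        # center before decomposition
    U, S, _ = np.linalg.svd(P, full_matrices=False)
    return nodes, U[:, :2] * S[:2]             # 2-D topology-preserving coordinates

# Toy usage: a 5x5 grid network with three corner anchors.
N = 5
adj = {(i, j): [(i+di, j+dj) for di, dj in [(1,0), (-1,0), (0,1), (0,-1)]
                if 0 <= i+di < N and 0 <= j+dj < N] for i in range(N) for j in range(N)}
nodes, tc = topology_coordinates(adj, [(0, 0), (0, N-1), (N-1, 0)])
print(tc[:5])
```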
Item Open Access
Applications of digital adaptive filters to time-resolved optical microscopy (Colorado State University. Libraries, 2020)
Gupta, Saurabh, author; Wilson, Jesse W., advisor; Pezeshki, Ali, committee member; Thamm, Douglas, committee member
Phosphorescence lifetime imaging is used on several fronts, such as skin cancer or melanoma diagnosis and estimation of tissue oxygenation, among others. Oxygen profiling is critical for mapping brain activity, apart from its use to monitor several metabolic activities, and often employs oxygen-tagging molecules/probes. In this work, we describe a novel technique to recover phosphorescence lifetime using a real-time digital adaptive filter running on a field-programmable gate array (FPGA) and conclude with an important takeaway. We also describe our strategy to mitigate relative intensity noise (RIN) in ultrafast fiber lasers, which are an attractive alternative to bulk lasers for non-linear optical microscopy due to their compactness and low cost. The high RIN of these lasers poses a challenge for pump-probe measurements such as transient absorption and stimulated Raman scattering, along with modalities that provide label-free contrast from the vibrational and electronic structure of molecules. Our real-time approach for RIN suppression uses a digital adaptive noise canceller implemented on an FPGA. We demonstrate its application to transient absorption spectroscopy and microscopy and show compatibility with a commercial lock-in amplifier. Lastly, we report the noise estimates specific to our current setup.
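The adaptive noise cancellation idea can be sketched with a simple LMS filter: a reference channel (assumed here to be a photodiode monitoring the laser's intensity noise) is filtered and subtracted from the primary channel so that only the uncorrelated signal remains. This is an illustrative sketch, not the instrument's FPGA implementation; the tap count, step size, and synthetic coupling filter are assumptions.

```python
# Minimal sketch, assuming the primary channel = signal + causally filtered copy of the reference noise.
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=8, mu=0.01):
    """Return the cleaned primary signal and the final filter taps."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # reference[n], ..., reference[n-n_taps+1]
        e = primary[n] - w @ x                      # error = cleaned sample
        w += 2 * mu * e * x                         # LMS tap update
        out[n] = e
    return out, w

# Toy usage: weak sinusoidal pump-probe signal buried in shared intensity noise.
rng = np.random.default_rng(2)
n = 20000
rin = rng.standard_normal(n)
coupled = 0.7 * rin.copy()
coupled[1:] += 0.2 * rin[:-1]
coupled[2:] += 0.1 * rin[:-2]
signal = 0.05 * np.sin(2 * np.pi * 0.01 * np.arange(n))
cleaned, _ = lms_noise_canceller(signal + coupled, rin)
print("residual noise power:", np.var(cleaned[1000:] - signal[1000:]))
```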
Item Open Access
Attenuation correction of X-band polarimetric Doppler weather radar signals: application to systems with high spatio-temporal resolution (Colorado State University. Libraries, 2015)
Gálvez, Miguel Bustamante, author; Bringi, V. N., advisor; Colom-Ustariz, Jose G., advisor; Jayasumana, Anura, committee member; Pezeshki, Ali, committee member; Mielke, Paul W., committee member
In the last decade the atmospheric science community has seen widespread and successful application of X-band dual-polarization weather radars for measuring precipitation in the lowest 2 km of the troposphere. These X-band radars have the advantage of a smaller footprint, lower cost, and improved detection of hydrometeors due to increased range resolution. In recent years, the hydrology community began incorporating these radars in novel applications to study the spatio-temporal variability of rainfall from precipitation measurements near the ground, over watersheds of interest. The University of Iowa mobile XPOL radar system is one of the first to be used as an X-band polarimetric radar network dedicated to hydrology studies. During the spring of 2013, the Iowa XPOL radars participated in NASA Global Precipitation Measurement's (GPM) first field campaign focused solely on hydrology studies, called the Iowa Flood Studies (IFloodS). Weather radars operating in the 3.2 cm (X-band) regime can suffer from severe attenuation, particularly in heavy convective storms. This has led to the development of sophisticated algorithms for X-band radars to correct the meteorological observables for attenuation. This is especially important for higher range resolution hydrology-specific X-band weather radars, where the attenuation correction aspect remains relatively unexamined. This research studies the problem of correcting for precipitation-induced attenuation in X-band polarimetric weather radars with high spatio-temporal resolution for hydrological applications. We also examine the variability in scattering simulations obtained from the drop spectra measured by two-dimensional video disdrometers (2DVD) located in different climatic and geographical locations. The 2DVD simulations provide a ground truth for various relations (e.g., AH-KDP and AH-ADP) applied to our algorithms for estimating attenuation, and ultimately correcting for it to provide improved rain rates and hydrometeor identification. We developed a modified ZPHI attenuation correction algorithm, with a differential phase constraint, and tuned it for the high resolution IFloodS data obtained by the Iowa XPOL radars. Although this algorithm has good performance in pure rain events, it is difficult to fully correct for attenuation and differential attenuation near the melting layer, where a mixed phase of rain and melting snow or graupel exists. To identify these regions, we propose an improved iterative FIR range filtering technique, as first presented by Hubbert and Bringi (1995), to better estimate the differential backscatter phase, δ, due to Mie scattering at X-band from mixed phase precipitation. In addition, we investigate dual-wavelength algorithms to directly estimate the α and β coefficients of the AH = αKDP and ADP = βKDP relations, to obtain the path integrated attenuation due to rain and wet ice or snow in the region near the melting layer. We use data from the dual-wavelength, dual-polarization CSU-CHILL S-/X-band Doppler weather radar to analyze the coefficients and compare their variability as a function of height, where the hydrometeors are expected to go through a microphysical transformation as they fall, starting as snow or graupel/hail and then melting into rain or a rain-hail mixture. The S-band signal is un-attenuated and so forms a reference for estimating the X-band attenuation and differential attenuation. We present the ranges of the α and β coefficients in these varying precipitation regimes to help improve KDP-based attenuation correction algorithms at X-band as well as rain rate algorithms based on the derived AH.

Item Open Access
Big Data decision support system (Colorado State University. Libraries, 2022)
Ma, Tian J., author; Chong, Edwin, advisor; Simske, Steve, committee member; Herber, Daniel, committee member; Pezeshki, Ali, committee member
Each day, the amount of data produced by sensors, social and digital media, and the Internet of Things is rapidly increasing. The volume of digital data is expected to double within the next three years. At some point, it might not be financially feasible to store all the data that is received. Hence, if data is not analyzed as it is received, the information collected could be lost forever. Actionable Intelligence is the next level of Big Data analysis, where data is used for decision making. This thesis document describes my scientific contribution to Big Data Actionable Intelligence generation. Chapter 1 consists of my colleagues' and my contribution to Big Data Actionable Intelligence architecture. The proven architecture has been demonstrated to support real-time actionable intelligence generation using disparate data sources (e.g., social media, satellite, newsfeeds). This work has been published in the Journal of Big Data. Chapter 2 shows my original method to perform real-time detection of moving targets using Remote Sensing Big Data. This work has also been published in the Journal of Big Data and it has received an issuance of a U.S. patent. As the Field-of-View (FOV) in remote sensing continues to expand, the number of targets observed by each sensor continues to increase. The ability to track large quantities of targets in real-time poses a significant challenge. Chapter 3 describes my colleague's and my contribution to the multi-target tracking domain. We have demonstrated that we can overcome real-time tracking challenges when there is a large number of targets. Our work was published in the Journal of Sensors.

Item Open Access
Block-based detection methods for underwater target detection and classification from electro-optical imagery (Colorado State University. Libraries, 2010)
Kabatek, Michael Jonathan, author; Azimi-Sadjadi, Mahmood R., advisor; Pezeshki, Ali, committee member; Wu, Mingzhong, committee member
Detection and classification of underwater mine-like objects is a complicated problem due to various factors such as variations in the operating and environmental conditions, presence of spatially varying clutter, target obstruction and occlusion, and variations in target shapes, compositions, and orientation. Also contributing to the difficulty of the problem is the lack of a priori knowledge about the shape and geometry of new non-mine-like objects that may be encountered, as well as changes in the environmental or operating conditions encountered during data collection. Two different block-based methods are proposed for detecting frames and localization of mine-like objects from a new CCD-based electro-optical (EO) imaging system. The block-based methods proposed in this study serve as an excellent tool for detection in low contrast frame sequences, as well as providing means for classifying detected objects as target or non-target objects. The detection methods employed provide frame location, automatic object segmentation, and accurate spatial locations of detected objects. The problem studied in this work is the detection of mine-like objects from a new CCD imagery data set which consists of runs containing tens to hundreds of frames (taken by the CCD camera). The goal is to detect frames containing mine-like objects, as well as locating detected objects and segmenting them from the frame to be subsequently classified as mine-like objects or background clutter. While object segmentation and classification of detected objects are also required as with the previous EO systems, the main challenge is successful frame detection with a low false alarm rate. This has prompted research on new detection methods which utilize block-based snapshot information in order to identify potential frames containing targets, and spatially localize detected objects within those detected frames. More specifically, we have addressed the CCD object detection problem by developing block-based Gauss-Gauss and matched subspace formulations. The block-based detection framework is applied to raw CCD data directly from the sensor without the need for computationally expensive filtering or pre-processing as with the previous methods. The detector operates by measuring the log-likelihood ratio in each block of a given frame and provides a spatial 'likelihood map'. This detection process provides log-likelihood measurements of blocks in a given EO image which can then be thresholded to generate regions of interest within the frame to be subsequently classified. This two-step process in both the Gauss-Gauss and matched subspace detectors consists of first measuring the log-likelihood, determining the frames of interest and then the regions of interest (ROI), and finally classifying the detected object ROIs based upon shape-dependent features. Complex Zernike moments are extracted from each region of interest and are subsequently used to classify detected objects. The shape-based Zernike moments provide rotational invariance and robustness to noise, which are desirable characteristics for classification. This block-based framework provides flexibility in the detection methods used prior to object classification, and solves the problem of having to invoke a classification system on every CCD frame by determining frames containing only potential targets. A comprehensive study of the block-based detection and classification methods is carried out on a CCD imagery data set. A comparison is made on the detection and false alarm rate performance for the Gauss-Gauss and matched subspace detectors on the CCD data sets acquired from Applied Signal Technologies in Sunnyvale, CA. In addition, a neural-network-based classification system is employed to perform object classification based upon the extracted Zernike moments. The tested data set from AST consists of ten runs over the mine field, each run containing up to several hundred frames. The number of frames tested totals 1317, with 16 frames containing a single or partial target in five of the data runs. Results illustrating the effectiveness of the proposed detection methods are presented in terms of correct detection and false alarm rates. It is observed that the low-rank Gauss-Gauss detector provides an overall frame detection rate of 100% at the cost of a false alarm rate of 36.9%. The matched subspace detector outperforms the Gauss-Gauss method and reduces the false frame detection rate by 16.9%. Using the Zernike features extracted from the matched subspace detector's output and an artificial neural network classifier yields a true frame detection rate of Pd = 100% at the cost of Pfd = 16.8%, reducing the false frames detected by 3.3%. The reduced-rank Gauss-Gauss detector has a detection rate of Pd = 100% at the cost of a probability of false detection Pfd = 36.9%; using features extracted from the reduced-rank Gauss-Gauss detector's output passed to the neural network classifier yields a true detection rate of Pd = 100% at the cost of Pfd = 21.7%, which significantly reduces the detected false frames by 15.1%.
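A block-based likelihood map of the kind described above can be illustrated with a toy Gauss-Gauss log-likelihood ratio evaluated on non-overlapping image blocks and then thresholded into regions of interest. This is not the thesis implementation; the block size, the zero-mean Gaussian block models, and the synthetic image are assumptions.

```python
# Minimal sketch, assuming target (R1) and background (R0) block covariances are known.
import numpy as np

def gauss_gauss_llr(block, R0_inv, R1_inv, logdet_R0, logdet_R1):
    """Log-likelihood ratio for a zero-mean Gaussian block: target vs. background."""
    x = block.flatten()
    return 0.5 * (x @ (R0_inv - R1_inv) @ x + logdet_R0 - logdet_R1)

def likelihood_map(image, R0, R1, b=8):
    """Evaluate the block LLR over an image tiled with b-by-b blocks."""
    R0_inv, R1_inv = np.linalg.inv(R0), np.linalg.inv(R1)
    l0, l1 = np.linalg.slogdet(R0)[1], np.linalg.slogdet(R1)[1]
    H, W = image.shape[0] // b, image.shape[1] // b
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = gauss_gauss_llr(image[i*b:(i+1)*b, j*b:(j+1)*b],
                                        R0_inv, R1_inv, l0, l1)
    return out

# Toy usage: background variance 1, "target" block with larger variance.
b = 8
R0, R1 = np.eye(b*b), 4.0 * np.eye(b*b)
img = np.random.default_rng(3).standard_normal((64, 64))
img[24:32, 24:32] *= 2.0                       # synthetic high-energy block
print(np.unravel_index(np.argmax(likelihood_map(img, R0, R1, b)), (8, 8)))
```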
Item Open Access
Characterization of a photoluminescence-based fiber optic sensor system (Colorado State University. Libraries, 2011)
Yi, Zhangjing, author; Lear, Kevin L., advisor; Pezeshki, Ali, committee member; Mueller, Jennifer L., committee member
Measuring multiple analyte concentrations is essential for a wide range of environmental applications, which are important for the pursuit of public safety and health. Target analytes are often toxic chemical compounds found in groundwater or soil. However, in-situ measurement of such analytes still faces various challenges. Some of these challenges are rapid response for near-real-time monitoring, simultaneous measurement of multiple analytes in a complex target environment, and high sensitivity for low analyte concentrations without sample pretreatment. This thesis presents a low-cost, robust, multichannel fiber optic photoluminescence (PL)-based sensor system using a time-division multiplexing architecture for multiplex biosensor arrays for in-situ measurements in environmental applications. The system was designed based upon an indirect sensing scheme with pH- or oxygen-sensitive dye molecules working as the transducer that is easily adaptable with various enzymes for detecting different analytes. A characterization of the multi-channel fiber optic PL-based sensor system was carried out in this thesis. Experiments were designed with interest in investigating this system's performance with only the transducer, thus providing reference figures of merit, such as sensitivity and limit of detection, for further experiments or applications with the addition of various biosensors. A pH-sensitive dye, fluoresceinamine (FLA), used as the transducer, is immobilized in a poly vinyl alcohol (PVA) matrix for the characterization. The system exhibits a sensitivity of 8.66 × 10^5 M^-1 as the Stern-Volmer constant, K_SV, in an H+ concentration measurement range of 0.002 - 891 μM (pH of 3.05 - 8.69). A mathematical model is introduced to describe the Stern-Volmer equation's non-idealities, which are fluorophore fractional accessibility and back reflection. Channel-to-channel uniformity is characterized with the modified Stern-Volmer model. Combining the FLA with appropriate enzymatic biosensors, the system is capable of 1,2-dichloroethane (DCA) and ethylene dibromide (EDB) detection. The calculated limit of detection (LOD) of the system can be as low as 0.08 μg/L for DCA and 0.14 μg/L for EDB. The performances of a fused fiber coupler and a bifurcated fiber assembly were investigated for application in fiber optic PL-based sensor systems in this thesis. Complex tradeoffs among back reflection noise, coupling efficiency, and split ratio were analyzed with theoretical and experimental data. A series of experiments and simulations were carried out to compare the two types of fiber assemblies in PL-based sensor systems in terms of excess loss, split ratio, back reflection, and coupling efficiency. A noise source analysis of three existing PL-intensity-based fiber optic enzymatic biosensor systems is provided to reveal the power distribution of different noise components. The three systems are a single channel system with a spectrometer as the detection device, a lab-developed multi-channel system, and a commercial prototype multi-channel system, both of the latter using a photomultiplier tube (PMT) as the detection device. The thesis discusses the design differences of all three systems and some of the circuit design alteration attempts for performance improvements.
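For readers unfamiliar with the Stern-Volmer constant quoted above, the sketch below estimates K_SV from the ideal linear relation I0/I = 1 + K_SV [Q]. It is not the thesis analysis (which further modifies the model for fractional accessibility and back reflection); the synthetic quencher concentrations and noise level are assumptions.

```python
# Minimal sketch, assuming ideal Stern-Volmer quenching with multiplicative measurement noise.
import numpy as np

def fit_stern_volmer(conc, intensity, i0):
    """Least-squares slope of (I0/I - 1) versus quencher concentration (line through origin)."""
    y = i0 / np.asarray(intensity) - 1.0
    x = np.asarray(conc)
    return (x @ y) / (x @ x)                  # slope = K_SV

# Toy usage: simulate quenching with K_SV = 8.66e5 M^-1 and recover it from noisy intensities.
k_true, i0 = 8.66e5, 1.0
conc = np.linspace(2e-9, 5e-6, 25)            # quencher concentration, mol/L
intensity = i0 / (1 + k_true * conc)
intensity *= 1 + 0.01 * np.random.default_rng(4).standard_normal(conc.size)
print("estimated K_SV:", fit_stern_volmer(conc, intensity, i0))
```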
Item Open Access
Characterization of multiple time-varying transient sources from multivariate data sequences (Colorado State University. Libraries, 2014)
Wachowski, Neil, author; Azimi-Sadjadi, Mahmood R., advisor; Breidt, F. Jay, committee member; Fristrup, Kurt, committee member; Pezeshki, Ali, committee member
Characterization of multiple time-varying transient sources using sequential multivariate data is a broad and complex signal processing problem. In general, this process involves analyzing new observation vectors in a data stream of unknown length to determine if they contain the signatures of a source of interest (i.e., a signal), in which case the source's type and interference-free signatures may be estimated. This process may continue indefinitely to detect and classify several events of interest, thereby yielding an aggregate description of the data's contents. Such capabilities are useful in numerous applications that involve continuously observing an environment containing complicated and erratic signals, e.g., habitat monitoring using acoustical data, medical diagnosis via magnetic resonance imaging, and underwater mine hunting using sonar imagery. The challenges associated with successful transient source characterization are as numerous as the application areas, and include 1) significant variations among signatures emitted by a given source type, 2) the presence of multiple types of random yet structured interference sources whose signatures are superimposed with those of signals, 3) a data representation that is not necessarily optimized for the task at hand, 4) variable environmental and operating conditions, and many others. These challenges are compounded by the inherent difficulties associated with processing sequential multivariate data, namely the inability to exploit the statistics or structure of the entire data stream. On the other hand, the complications that must be addressed often vary significantly when considering different types of data, leading to an abundance of existing solutions that are each specialized for a particular application. In other words, most existing work only simultaneously considers a subset of these complications, making it difficult to generalize. The work in this thesis was motivated by an application involving characterization of national park soundscapes in terms of commonly occurring man-made and natural acoustical sources, using streams of "1/3 octave vector" sequences. Naturally, this application involves developing solutions that consider all of the mentioned challenges, among others. Two comprehensive solutions to this problem were developed, each with unique strengths and weaknesses relative to one another. A sequential random coefficient tracking (SRCT) method was developed first, which hierarchically applies a set of likelihood ratio tests to each incoming vector observation to detect and classify up to one signal and one interference source that may be simultaneously present. Since the signatures of each acoustical event typically span several adjacent observations, a Kalman filter is used to generate the parameters necessary for computing the likelihood values. The SRCT method is also capable of using the coefficient estimates produced by the Kalman filter to generate estimates of both the signal and interference components of the observation, thus performing separation in a dual source scenario. The main benefits of this method are its computational efficiency and its ability to characterize both components of an observation (signal and interference). To address some of the main deficiencies of the SRCT method, a sparse coefficient state tracking (SCST) approach was also developed. This method was designed to detect and classify signals when multiple types of interference are simultaneously present, while avoiding restrictive assumptions concerning the distribution of observation components. The SCST method uses generalized likelihood ratio tests to perform signal detection and classification during quiescent periods, and quiescent detection whenever a signal is present. To form these tests, the likelihood of each signal model is found given a sparse approximation of an incoming observation, which makes the temporal evolution of source signatures more tractable. Robustness to structured interference is incorporated by virtue of the inherent separation capabilities of sparse coding. Each signal model is characterized by a Bayesian network, which captures the dependencies between different coefficients in the sparse approximation under the associated hypothesis. In addition to developing two complete transient source characterization systems, this thesis also introduces several concepts and tools that may be used to aid in the development of new systems designed for similar tasks, or to supplement existing ones. Of particular note are a comprehensive overview of existing general approaches for detecting changes in the parameters of sequential data streams, a new method for performing fusion of sequential classification decisions based on a hidden Markov model framework, and a detailed analysis of the 1/3 octave data format mentioned above. The latter is especially helpful since this data format is commonly used in audio analysis applications. A comprehensive study is carried out to evaluate the performance of the developed methods for detecting, classifying, and estimating the signatures of signals using 1/3 octave soundscape data that is corrupted with multiple types of structured interference. The systems are benchmarked against a Gaussian mixture model approach that was adapted to handle the complexities of the soundscape data, as such approaches are frequently used in acoustical source recognition applications. Performance is mainly measured in terms of the receiver operating characteristic (ROC) of the test statistics implemented by each method, the improvement in signal-to-noise ratio they offer when estimating signatures, and their overall ability to accurately detect and classify signals of interest. It was observed that both the SRCT and SCST methods perform exceptionally well on the national park soundscape data, though the latter performs best in the presence of heavy interference and is more flexible in new environmental and operating conditions.

Item Open Access
Continuum limits of Markov chains with application to wireless network modeling and control (Colorado State University. Libraries, 2014)
Zhang, Yang, author; Chong, Edwin K. P., advisor; Estep, Donald, committee member; Luo, J. Rockey, committee member; Pezeshki, Ali, committee member
We investigate the continuum limits of a class of Markov chains. The investigation of such limits is motivated by the desire to model networks with a very large number of nodes. We show that a sequence of such Markov chains indexed by N, the number of components in the system that they model, converges in a certain sense to its continuum limit, which is the solution of a partial differential equation (PDE), as N goes to infinity. We provide sufficient conditions for the convergence and characterize the rate of convergence. As an application we approximate Markov chains modeling large wireless networks by PDEs. We first describe PDE models for networks with uniformly located nodes, and then generalize to networks with nonuniformly located, and possibly mobile, nodes. While traditional Monte Carlo simulation for very large networks is practically infeasible, PDEs can be solved with reasonable computation overhead using well-established mathematical tools. Based on the PDE models, we develop a method to control the transmissions in nonuniform networks so that the continuum limit is invariant under perturbations in node locations. This enables the networks to maintain stable global characteristics in the presence of varying node locations.

Item Open Access
Cooperative defense mechanisms for detection, identification and filtering of DDoS attacks (Colorado State University. Libraries, 2016)
Mosharraf Ghahfarokhi, Negar, author; Jayasumana, Anura P., advisor; Ray, Indrakshi, advisor; Pezeshki, Ali, committee member; Malaiya, Yashwant, committee member
To view the abstract, please see the full text of the document.

Item Open Access
Cooperative sensing for target estimation and target localization (Colorado State University. Libraries, 2011)
Zhang, Wenshu, author; Yang, Liuqing, advisor; Pezeshki, Ali, committee member; Luo, J. Rockey, committee member; Wang, Haonan, committee member
As a novel sensing scheme, cooperative sensing has drawn great interest in recent years. By utilizing the concept of "cooperation", which incorporates communications and information exchanges among multiple sensing devices, e.g., radar transceivers in radar systems, sensor nodes in wireless sensor networks, or mobile handsets in cellular systems, the sensing capability can achieve significant improvement compared to the conventional noncooperative mode in many aspects. For example, cooperative target estimation is inspired by the benefits of MIMO in communications, where multiple transmit and/or receive antennas can increase the diversity to combat channel fading for enhanced transmission reliability and increase the degrees of freedom for improved data rate. On the other hand, cooperative target localization is able to dramatically increase localization performance in terms of both accuracy and coverage. From the perspective of cooperative target estimation, in this dissertation, we optimize waveforms from multiple cooperative transmitters to facilitate better target estimation in the presence of colored noise. We introduce the normalized MSE (NMSE) minimizing criterion for radar waveform designs. Not only is it more meaningful for parameter estimation problems, but it also exhibits behavior more similar to the MI criterion than its MMSE counterpart. We also study robust designs for both the probing waveforms at the transmitter and the estimator at the receiver to address one type of a priori information uncertainty, i.e., in-band target and noise PSD uncertainties. The relationship between MI and MSEs is further investigated through analysis of the sensitivity of the optimum design to the out-band PSD uncertainties, also known as the overestimation error. From the perspective of cooperative target localization, in this dissertation, we study the two phases that comprise a localization process, i.e., the distance measurement phase and the location update phase. In the first, distance measurement phase, thanks to UWB signals' many desirable features, including high delay resolution and obstacle penetration capabilities, we adopt UWB technology for TOA estimation, and then translate the TOA estimate into distance given the light propagation speed. We develop a practical data-aided ML timing algorithm and obtain its optimum training sequence. Based on this optimum sequence, the original ML algorithm can be simplified without affecting its optimality. In the second, location update phase, we investigate secure cooperative target localization in the presence of malicious attacks, which constitutes a fundamental issue in localization problems. We explicitly incorporate anchors' misplacements into the distance measurement model and explore the pairwise sparse nature of the misplacements. We formulate the secure localization problem as an ℓ1-regularized least squares (LS) problem and establish the pairwise sparsity upper bound which defines the largest possible number of identifiable malicious anchors. Particularly, it is demonstrated that, with target cooperation, the capability of secure localization is improved in terms of misplacement estimation and target location estimation accuracy compared to the single target case.
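The general idea of ℓ1-regularized least-squares secure localization can be sketched by treating a sparse set of corrupted anchor measurements as an extra variable handled by soft-thresholding. This is not the dissertation's formulation (which works with anchor misplacements and pairwise sparsity); the alternating Gauss-Newton/soft-threshold scheme, geometry, noise level, and regularization weight are assumptions for the toy example.

```python
# Minimal sketch: min over (p, e) of ||d - range(p) - e||^2 + lam * ||e||_1, e sparse.
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def secure_localize(anchors, d, lam=4.0, iters=50):
    anchors = np.asarray(anchors, float)
    p = anchors.mean(axis=0)                  # initial target guess
    e = np.zeros(len(d))                      # sparse attack estimate
    for _ in range(iters):
        diff = p - anchors
        rng_est = np.linalg.norm(diff, axis=1)
        r = d - e - rng_est                   # residual with attack estimate removed
        J = diff / rng_est[:, None]           # Jacobian of the ranges w.r.t. p
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]                  # Gauss-Newton step
        e = soft(d - np.linalg.norm(p - anchors, axis=1), lam / 2)    # l1 proximal step
    return p, e

# Toy usage: six anchors, one of them reports a biased (malicious) distance.
rng = np.random.default_rng(5)
anchors = rng.uniform(0, 100, size=(6, 2))
target = np.array([40.0, 55.0])
d = np.linalg.norm(anchors - target, axis=1) + 0.1 * rng.standard_normal(6)
d[2] += 25.0                                  # malicious anchor bias
p_hat, e_hat = secure_localize(anchors, d)
print("target estimate:", p_hat, " flagged anchors:", np.nonzero(np.abs(e_hat) > 1)[0])
```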
Item Open Access
Design and control of kinematically redundant robots for maximizing failure-tolerant workspaces (Colorado State University. Libraries, 2021)
Bader, Ashraf M., author; Maciejewski, Anthony A., advisor; Oprea, Iuliana, committee member; Pezeshki, Ali, committee member; Young, Peter, committee member
Kinematically redundant robots have extra degrees of freedom so that they can tolerate a joint failure and still complete an assigned task. Previous work has defined the "failure-tolerant workspace" as the workspace that is guaranteed to be reachable both before and after an arbitrary locked-joint failure. One mechanism for maximizing this workspace is to employ optimal artificial joint limits prior to a failure. This dissertation presents two techniques for determining these optimal artificial joint limits. The first technique is based on the gradient ascent method. The proposed technique is able to deal with the discontinuities of the gradient that are due to changes in the boundaries of the failure-tolerant workspace. This technique is illustrated using two examples of three-degree-of-freedom planar serial robots. The first example is an equal link length robot where the optimal artificial joint limits are computed exactly. In the second example, both the link lengths and artificial joint limits are determined, resulting in a robot design that has more than twice the failure-tolerant area of previously published locally optimal designs. The second technique presented in this dissertation is a novel hybrid technique for estimating the failure-tolerant workspace size for robots of arbitrary kinematic structure and any number of degrees of freedom performing tasks in a 6D workspace. The method presented combines an algorithm for computing self-motion manifold ranges to estimate workspace envelopes and Monte-Carlo integration to estimate orientation volumes, creating a computationally efficient algorithm. This algorithm is then combined with the coordinate ascent optimization technique to determine optimal artificial joint limits that maximize the size of the failure-tolerant workspace of a given robot. This approach is illustrated on multiple examples of robots that perform tasks in 3D planar and 6D spatial workspaces.

Item Open Access
Design methodology and productivity improvement in high speed VLSI circuits (Colorado State University. Libraries, 2017)
Hossain, KM Mozammel, author; Chen, Thomas W., advisor; Malaiya, Yashwant, committee member; Pasricha, Sudeep, committee member; Pezeshki, Ali, committee member
To view the abstract, please see the full text of the document.

Item Open Access
Design of integrated on-chip impedance sensors (Colorado State University. Libraries, 2014)
Kern, Tucker, author; Chen, Thomas W., advisor; Pezeshki, Ali, committee member; Tobet, Stuart, committee member
In this thesis two integrated sensor systems for measuring the impedance of a device under test (DUT) are presented. Both sensors have potential applications in label-free affinity biosensors for biological and bio-medical analysis. The first sensor is a purely capacitive sensor that operates on the theory of capacitive division. The test capacitance is placed within a capacitive divider and produces an output voltage proportional to its value. This voltage is then converted to a time-domain signal for easy readout. The prototype capacitive sensor shows a resolution of 5 fF on a base of 500 fF, which corresponds to a 1% resolution. The second sensor, a general purpose impedance sensor, calculates the ratio between a DUT and a reference impedance when stimulated by a sinusoidal signal. Computation of DUT magnitude and phase is accomplished in silicon via mixed-signal division and a phase module. An automatic gain controller (AGC) allows the sensor to measure impedance from 30 Ω to 2.5 MΩ with no more than 10% error and a resolution of at least 0.44%. Prototypes of both sensing topologies were implemented in a 0.18 μm CMOS process and their operation in silicon was verified. The prototype capacitive sensor required a circuit area of 0.014 mm² and successfully demonstrated a resolution of 5 fF in silicon. A prototype impedance sensor without the phase module or AGC was implemented with a circuit area of 0.17 mm². Functional verification of the peak capture systems and mixed-signal divider was accomplished. The complete implementation of the impedance sensor, with phase module and AGC, requires an estimated 0.28 mm² of circuit area.
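The capacitive-division principle behind the first sensor can be sketched numerically: in one common arrangement, an AC drive across the series combination of a reference capacitor and the test capacitance produces a mid-node voltage set by the capacitance ratio, which can be inverted to recover the test value. This is an illustrative sketch, not the chip's circuit; the drive level, reference value, and divider orientation are assumptions.

```python
# Minimal sketch, assuming an ideal two-capacitor divider (no parasitics, no loading).
def divider_output(v_drive, c_test, c_ref):
    """Mid-node voltage when v_drive is applied across C_ref in series with C_test."""
    return v_drive * c_test / (c_test + c_ref)

def capacitance_from_output(v_drive, v_out, c_ref):
    """Invert the divider relation to recover the test capacitance."""
    return c_ref * v_out / (v_drive - v_out)

v_drive, c_ref = 1.8, 500e-15                 # volts, farads (assumed values)
for c_test in (495e-15, 500e-15, 505e-15):    # 5 fF steps around a 500 fF base
    v = divider_output(v_drive, c_test, c_ref)
    print(f"C = {c_test*1e15:.0f} fF -> Vout = {v*1e3:.3f} mV, "
          f"recovered C = {capacitance_from_output(v_drive, v, c_ref)*1e15:.1f} fF")
```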
Item Open Access
Designing novel radio-frequency coils for high field and ultra-high field magnetic resonance imaging (Colorado State University. Libraries, 2021)
Athalye, Pranav Shrikant, author; Notaroš, Branislav, advisor; Ilić, Milan, committee member; Pezeshki, Ali, committee member; Johnson, Thomas, committee member
High field and ultra-high field magnetic resonance imaging is the upcoming technology in the field of magnetic resonance imaging. This has created the need for designing new radio frequency (RF) coils. Several novel RF coils are presented here, including multifilar helical antenna coils for 3-T, 4.7-T, 7-T, and 10.5-T NMR scanners, slotted-waveguide array coils for 7-T, and an inverted microstrip array coil for 7-T, along with other methods to improve the efficiency and homogeneity of the RF field. The coils were simulated using commercial electromagnetic solvers, including WIPL-D and ANSYS-HFSS, and some were also measured experimentally. The results for B1+ efficiency are compared with state-of-the-art coils. These novel coils exhibit high B1+ efficiency, strong right-hand polarization, and good field homogeneity with an acceptable level of SAR. Details of the numerical methods used for the simulations of the coils are also discussed, along with ongoing work and future plans.

Item Open Access
Detection of multiple correlated time series and its application in synthetic aperture sonar imagery (Colorado State University. Libraries, 2014)
Klausner, Nicholas Harold, author; Azimi-Sadjadi, Mahmood R., advisor; Scharf, Louis L., advisor; Pezeshki, Ali, committee member; Cooley, Dan, committee member
Detecting the presence of a common but unknown signal among two or more data channels is a problem that finds its uses in many applications, including collaborative sensor networks, geological monitoring of seismic activity, radar, and sonar. Some detection systems in such situations use decision fusion to combine individual detection decisions into one global decision. However, this detection paradigm can be sub-optimal as local decisions are based on the perspective of a single sensory system. Thus, methods that capture the coherent or mutual information among multiple data sets are needed. This work considers the problem of testing for independence among multiple (≥ 2) random vectors. The solution is attained by considering a Generalized Likelihood Ratio Test (GLRT) that tests the null hypothesis that the composite covariance matrix of the channels, a matrix containing all inter- and intra-channel second-order information, is block-diagonal. The test statistic becomes a generalized Hadamard ratio given by the ratio of the determinant of the estimate of this composite covariance matrix over the product of the determinants of its diagonal blocks. One important question in the practical application of any likelihood ratio test is the value of the test statistic needed to achieve sufficient evidence in support of the decision to reject the null hypothesis. To gain some understanding of the false alarm probability or size of the test for the generalized Hadamard ratio, we employ the theory of Gram determinants to show that the likelihood ratio can be written as a product of ratios of squared residuals from two linear prediction problems. This expression for the likelihood ratio leads quite simply to the fact that the generalized Hadamard ratio is stochastically equivalent to a product of independently distributed beta random variables under the null hypothesis. Asymptotically, the scaled logarithm of the generalized Hadamard ratio converges in distribution to a chi-squared random variable as the number of samples used to estimate the composite covariance matrix grows large. The degrees of freedom for this chi-squared distribution are closely related to the dimensions of the parameter spaces considered in the development of the GLRT. Studies of this asymptotic distribution seem to indicate, however, that the rate of convergence is particularly slow for all but the simplest of problems and may therefore lack practicality. For this reason, we consider the use of saddlepoint approximations as a practical alternative for this problem. This leads to methods that can be used to determine the threshold needed to approximately achieve a desired false alarm probability. We next turn our attention to an alternative implementation of the generalized Hadamard ratio for 2-dimensional wide-sense stationary random processes. Although the true GLRT for this problem would impose a Toeplitz structure (more specifically, a Toeplitz-block-Toeplitz structure) on the estimate of the composite covariance matrix, an intractable problem with no closed-form solution, the asymptotic theory of large Toeplitz matrices shows that the generalized Hadamard ratio converges to a broadband coherence statistic as the size of the composite covariance matrix grows large. Although an asymptotic result, simulations of several applications show that even finite dimensional implementations of the broadband coherence statistic can provide a significant improvement in detection performance. This improvement in performance is most likely attributed to the fact that, by constraining the model to incorporate stationarity, we have alleviated some of the difficulties associated with estimating highly parameterized models. Although more generally applicable, the unconstrained covariance estimates used in the generalized Hadamard ratio require the estimation of a much larger number of parameters. These methods are then applied to the detection of underwater targets in pairs of high frequency and broadband sonar images coregistered over the seafloor. This is a difficult problem due to various factors such as variations in the operating and environmental conditions, presence of spatially varying clutter, and variations in target shapes, compositions, and orientation. A comprehensive study of these methods is conducted using three sonar imagery datasets. The first two datasets are actual images of objects lying on the seafloor and are collected at different geographical locations, with the environments from each presenting unique challenges. These two datasets are used to demonstrate the usefulness of results pertaining to the null distribution of the generalized Hadamard ratio and to study the effects different clutter environments can have on its applicability. They are also used to compare the performance of the broadband coherence detector to several alternative detection techniques. The third dataset used in these studies contains actual images of the seafloor with synthetically generated targets of different geometrical shapes inserted into the images. The primary purpose of this dataset is to study the proposed detection technique's robustness to deviations from coregistration which may occur in practice due to the disparities in high frequency and broadband sonar. Using the results of this section, we show that the fundamental principle of detecting underwater targets using coherence-based approaches is itself a very useful solution for this problem and that the broadband coherence statistic is adequately adept at achieving this.
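The generalized Hadamard ratio described above is straightforward to compute from sample covariances, as the sketch below illustrates: the composite covariance of stacked channels is compared against the product of the determinants of its per-channel diagonal blocks. This is not the thesis code; the channel dimensions and the simulated common signal are assumptions for the toy example.

```python
# Minimal sketch, assuming L channels observed over the same N snapshots.
import numpy as np

def hadamard_ratio(channels):
    """channels: list of (dim_l x N) arrays; returns det(R) / prod(det(diagonal blocks))."""
    X = np.vstack(channels)                   # composite data matrix
    N = X.shape[1]
    R = X @ X.T / N                           # composite sample covariance
    logdet = np.linalg.slogdet(R)[1]
    start = 0
    for ch in channels:
        d = ch.shape[0]
        logdet -= np.linalg.slogdet(R[start:start + d, start:start + d])[1]
        start += d
    return np.exp(logdet)                     # in (0, 1]; small values suggest dependence

# Toy usage: three channels sharing a common signal versus independent channels.
rng = np.random.default_rng(6)
N, dims = 500, (4, 4, 4)
common = rng.standard_normal(N)
dependent = [rng.standard_normal((d, N)) + 0.8 * common for d in dims]
independent = [rng.standard_normal((d, N)) for d in dims]
print("correlated channels:  ", hadamard_ratio(dependent))
print("independent channels: ", hadamard_ratio(independent))
```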
Item Open Access
Distributed medium access control for an enhanced physical-link layer interface (Colorado State University. Libraries, 2020)
Heydaryanfroshani, Faeze, author; Luo, Rockey, advisor; Yang, Liuqing, committee member; Pezeshki, Ali, committee member; Wang, Haonan, committee member
Current wireless network architecture equips the data link layer with binary transmission/idling options and gives the control of choosing other communication parameters to the physical layer. Such a network architecture is inefficient in distributed wireless networks where user coordination can be infeasible or expensive in terms of overhead. To address this issue, an enhancement to the physical-link layer interface is proposed. At the physical layer, the enhanced interface is supported by a distributed channel coding theory, which equips each physical layer user with an ensemble of channel codes. The coding theory allows each transmitter to choose an arbitrary code to encode its message without sharing such a decision with the receiver. The receiver, on the other hand, should decode the messages of interest or report collision depending on whether or not a predetermined reliability threshold can be met. Fundamental limits of the system are characterized asymptotically using a "distributed channel capacity" when the codeword length can be taken to infinity, and non-asymptotically using an achievable performance bound when the codeword length is finite. The focus of this dissertation is to support the enhanced interface at the data link layer. We assume that each link layer user can be equipped with multiple transmission options, each corresponding to a coding option at the physical layer. Each user maintains a transmission probability vector whose entries specify the probability with which the user chooses the corresponding transmission options to transmit its packets. We propose a distributed medium access control (MAC) algorithm for a time-slotted multiple access system with/without the enhanced physical-link layer interface to adapt the transmission probability vector of each user to a desired equilibrium that maximizes a chosen network utility. The MAC algorithm is applicable to a general channel model and to a wide range of utility functions. The MAC algorithm falls into the stochastic approximation framework with guaranteed convergence under mild conditions. We developed design procedures to satisfy these conditions and to ensure that the system converges to a unique equilibrium. Simulation results are provided to demonstrate the fast and adaptive convergence behavior of the MAC algorithm as well as the near optimal performance of the designed equilibrium. We then extend the distributed MAC algorithm to support a hierarchical primary-secondary user structure in a random multiple access system. The hierarchical user structure is established in the following senses. First, when the number of primary users is small, channel availability is kept above a pre-determined threshold regardless of the number of secondary users that are competing for the channel. Second, when the number of primary users is large, transmission probabilities of the secondary users are automatically driven down to zero. Such a hierarchical structure is achieved without knowledge of the numbers of primary and secondary users and without direct information exchange among the users. Furthermore, we also investigate distributed MAC for a multiple access system with multiple non-interfering channels. We assume that users are homogeneous but the multiple channels can be heterogeneous. In this case, forcing all users to converge to a homogeneous transmission scheme becomes suboptimal. We extend the distributed MAC algorithm to adaptively assign each user to only one channel and to ensure a balanced load across different channels. While theoretical analysis of the extended MAC algorithm is still incomplete, simulation results show that the algorithm can help users converge to a near optimal channel assignment solution that maximizes a given network utility.
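To give a feel for the stochastic-approximation style of adaptation described above, the sketch below has each user nudge its transmission probability from channel feedback alone, with a diminishing step size. It is not the dissertation's algorithm: the single-option slotted-ALOHA setting and the specific update rule (driving the observed idle frequency toward 1/e, which for n symmetric users places p near 1/n) are assumptions for the toy example.

```python
# Minimal sketch, assuming a symmetric time-slotted system where users observe whether a slot was idle.
import numpy as np

def simulate(n_users=20, n_slots=60000, seed=7):
    rng = np.random.default_rng(seed)
    p = np.full(n_users, 0.5)                 # initial transmission probabilities
    for t in range(1, n_slots + 1):
        tx = rng.random(n_users) < p          # who transmits in this slot
        idle = not tx.any()
        step = 1.0 / (100 + t)                # diminishing step size (Robbins-Monro style)
        # Each user nudges p so that the long-run idle frequency approaches 1/e.
        p = np.clip(p + step * ((1.0 if idle else 0.0) - np.exp(-1)), 1e-3, 1.0)
    return p

p = simulate()
print("mean transmission probability:", p.mean(), " (1/n =", 1 / 20, ")")
```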