Browsing by Author "Pasricha, Sudeep, committee member"
Now showing 1 - 20 of 40
Item Open Access
Analysis and characterization of wireless smart power meter (Colorado State University. Libraries, 2014)
Soman, Sachin, author; Young, Peter, advisor; Zimmerle, Daniel, committee member; Pasricha, Sudeep, committee member
Recent increases in the demand for and price of electricity have stimulated interest in monitoring energy usage and improving efficiency. This research supports the development of a low-cost wireless smart power meter capable of measuring RMS values of voltage and current, real power, and reactive power. The proposed meter's features include tracking per-device consumption rates and usage patterns to assist users in monitoring the connected devices. The meter also includes condition monitoring to detect harmonics of interest in the connected circuits, which can give vital clues about defects in the machines connected to them. This work focuses on estimating the communication and computational requirements of the smart power meter and on optimizing the system based on those estimates. The wireless communication capabilities investigated here are limited to wireless technologies already present in the environment where the power meters will be deployed. Field tests are performed to measure the performance of the selected wireless standard in the deployment environment. The test results are used to determine the distance over which the smart power meters can communicate and where repeaters or range extenders are necessary to reduce data loss. The computational analysis covers the smart meter's front-end sampling of analog data from both current and voltage sensors. Digitized samples are stored in a buffer and then processed by a microcontroller to produce all the desired results from the power meter. Each processing stage requires computational bandwidth and memory that depend on the size of the data stream and the calculations involved in that stage. A Simulink-based system model of the power meter was developed to report statistics on the computational bandwidth demanded by each stage of data processing. The developed smart meter operates in an environment with other wireless devices, including Wi-Fi and Bluetooth. The data loss incurred when the smart power meter transmits depends on the architecture of the wireless network and on pre-existing wireless technologies operating in the same environment and frequency band. A good network design should reduce both the hardware cost and the data loss of the wireless network. A wireless sensor network is simulated in the OMNET++ platform to measure the performance of the wireless standard used in the smart power meters. Scenarios varying the number of routers in the network and the throughput between devices are considered to measure the performance of the wireless power meters. Supplementary documents provided with the electronic version of this thesis contain program code developed in Simulink and OMNET++.
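For reference, the core electrical quantities such a meter reports can be computed per sampling window along the following lines. This is a minimal NumPy sketch, not the thesis's Simulink implementation; the function name and windowing scheme are illustrative.

    import numpy as np

    def power_metrics(v, i):
        """RMS and power figures over one analysis window, given
        synchronized voltage (V) and current (A) samples."""
        v = np.asarray(v, dtype=float)
        i = np.asarray(i, dtype=float)
        v_rms = np.sqrt(np.mean(v ** 2))
        i_rms = np.sqrt(np.mean(i ** 2))
        p_real = np.mean(v * i)                  # real power, W
        s_apparent = v_rms * i_rms               # apparent power, VA
        q_reactive = np.sqrt(max(s_apparent ** 2 - p_real ** 2, 0.0))  # VAR
        return {"Vrms": v_rms, "Irms": i_rms, "P": p_real, "Q": q_reactive}

Harmonic content of interest for condition monitoring would typically be extracted from the same buffered samples with an FFT over the window.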
Item Open Access
Applications of inertial measurement units in monitoring rehabilitation progress of arm in stroke survivors (Colorado State University. Libraries, 2011)
Doshi, Saket Sham, author; Jayasumana, Anura P., advisor; Malcolm, Matthew P., committee member; Pasricha, Sudeep, committee member; Malaiya, Yashwant K., committee member
Constraint Induced Movement Therapy (CIMT) has been clinically proven effective in restoring functional abilities of the affected arm among stroke survivors. The current CIMT delivery method lacks a robust technique to monitor rehabilitation progress, which increases the cost of stroke-related health care. Recent advances in the design and manufacturing of Micro Electro Mechanical System (MEMS) inertial sensors have made it possible to track human motion reliably and accurately. This thesis presents three algorithms that enable monitoring of arm movements during CIMT by means of MEMS inertial sensors. The first algorithm quantifies affected-arm usage during CIMT. It filters arm movement data, sampled during activities of daily living (ADL), and applies a threshold to determine the duration of affected-arm movements. When an activity is performed multiple times, the algorithm counts the number of repetitions performed. The current technique uses a touch/proximity sensor and a motor activity log maintained by the patient to determine CIMT duration. Affected-arm motion is a direct indicator of a CIMT session, so this algorithm tracks rehabilitation progress more accurately. Analysis of movement data from actual patients shows that the algorithm detects activity with an average accuracy of >90%. The second algorithm tracks rehabilitation of the affected arm through a histogram of distance traversed, yielding an objective metric of rehabilitation progress. This metric can be used to compare stroke patients based on the functional ability of the affected arm. The algorithm calculates the histogram by evaluating distances traversed over a fixed-duration window, and the impact of this window on the algorithm's performance is analyzed. The algorithm has better temporal resolution than another standard objective test, the box and block test (BBT). It computes the linearly weighted area under the histogram as a score to rank patients by rehabilitation progress, and performs best for patients with chronic stroke and a certain degree of functional ability. Lastly, a Kalman filter based motion tracking algorithm is presented that tracks linear motions in 2D, where only one axis can experience motion at any given time. The algorithm has high (>95%) accuracy. Data representing linear human arm motion along a single axis is generated to analyze and determine optimal parameters of the Kalman filter. Cross-axis sensitivity of the accelerometer limits the performance of the algorithm over longer durations; a method to identify the 1D components of 2D motion is developed, and cross-axis effects are removed to improve the performance of the motion tracking algorithm.
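The first algorithm's threshold-and-duration logic can be pictured with a short sketch. The threshold and minimum duration below are illustrative values, not the thesis's tuned parameters.

    import numpy as np

    def detect_activity(accel_mag, fs, threshold=0.05, min_duration=0.25):
        """Flag affected-arm movement where the (filtered) acceleration
        magnitude exceeds a threshold for at least min_duration seconds;
        returns total active time in seconds and a repetition count."""
        active = np.asarray(accel_mag) > threshold
        edges = np.diff(active.astype(int))
        starts = np.flatnonzero(edges == 1) + 1
        ends = np.flatnonzero(edges == -1) + 1
        if active[0]:
            starts = np.r_[0, starts]          # movement begins at sample 0
        if active[-1]:
            ends = np.r_[ends, active.size]    # movement runs to the end
        min_len = int(min_duration * fs)
        runs = [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]
        return sum(e - s for s, e in runs) / fs, len(runs)

Each surviving run is one detected movement, so the run count doubles as the repetition count for a repeated activity.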
Item Open Access
Assessing the immediate impact of a movement tracking-based intervention for unilateral spatial neglect experienced by stroke survivors (Colorado State University. Libraries, 2015)
McFarland, Roxie, author; Malcolm, Matt, advisor; Greene, David, committee member; Pasricha, Sudeep, committee member
To view the abstract, please see the full text of the document.

Item Open Access
Automating the derivation of memory allocations for acceleration of polyhedral programs (Colorado State University. Libraries, 2024)
Ferry, Corentin, author; Rajopadhye, Sanjay, advisor; Derrien, Steven, advisor; Wilson, Jesse, committee member; Pasricha, Sudeep, committee member; McClurg, Jedidiah, committee member; Sadayappan, Ponnuswamy, committee member; de Dinechin, Florent, committee member; Collange, Caroline, committee member
As processors' compute power keeps increasing, so do their demands on memory: some computations require high bandwidth and exhibit regular memory access patterns, while others require low access latency and exhibit random access patterns. To cope with all of these demands, memory technologies are becoming diverse, and it is necessary to adapt both programs and hardware accelerators to the memory technology they use. Notably, memory access patterns and memory layouts have to be optimized. Manual optimization can be extremely tedious and does not scale to a large number of processors and memories, so automation becomes necessary. In this Ph.D. dissertation, we propose several automated methods to derive data layouts from programs, notably for FPGA accelerators. We focus on obtaining the best throughput from high-latency, high-bandwidth memories and, for all kinds of memories, the lowest redundancy while preserving contiguity. To this end, we introduce mathematical analyses to partition the data flow of programs with uniform and affine dependence patterns, and propose memory layouts and automation techniques to obtain optimized FPGA accelerators.
Item Open Access
Biologically inspired perching for aerial robots (Colorado State University. Libraries, 2021)
Zhang, Haijie, author; Zhao, Jianguo, advisor; Bradley, Thomas H., committee member; Pasricha, Sudeep, committee member; Guzik, Stephen, committee member
Micro Aerial Vehicles (MAVs) are widely used for various civilian and military applications (e.g., surveillance, search, and monitoring); however, one critical problem they face is limited airborne time (less than one hour) due to low aerodynamic efficiency, low energy storage capability, and high energy consumption. To address this problem, mimicking biological flyers to perch onto objects (e.g., walls, power lines, or ceilings) can significantly extend MAVs' functioning time for surveillance- or monitoring-related tasks. Successful perching for aerial robots, however, is quite challenging, as it requires a synergistic integration of mechanical and computational intelligence. Mechanical intelligence means mechanisms that passively damp out the impact between the robot and the perching object and robustly engage the robot with that object. Computational intelligence means algorithms that estimate, plan, and control the robot's motion so that it can progressively reduce its speed and adjust its orientation to perch on the object with a desired velocity and orientation. In this research, a framework for biologically inspired perching is investigated, focusing on both computational and mechanical intelligence. Computational intelligence includes vision-based state estimation and trajectory planning. Rather than relying on traditional flight states such as position and velocity, we leverage a biologically inspired state called time-to-contact (TTC), which represents the remaining time to reach the perching object at the current flight velocity. A faster and more accurate estimation method based on consecutive images is proposed to estimate TTC. A trajectory is then planned in TTC space to realize faster perching while satisfying multiple flight and perching constraints, e.g., maximum velocity, maximum acceleration, and desired contact velocity. For mechanical intelligence, we design, develop, and analyze a novel compliant bistable gripper with two stable states. When the gripper is open, it can be closed passively by the contact force between the robot and the perching object, eliminating the need for additional actuators or sensors. We also analyze the bistability of the gripper to guide and optimize its design. Finally, a customized MAV platform of size 250 mm is designed to combine computational and mechanical intelligence. A Raspberry Pi is used as the onboard computer for vision-based state estimation and control. In addition, a larger gripper is designed so the MAV can perch on a horizontal rod. Perching experiments using the designed trajectories reliably activate the bistable gripper while avoiding the large impact forces that could damage the gripper and the MAV. This research will enable robust perching of MAVs so that they can maintain a desired observation or resting position for long-duration inspection, surveillance, search, and rescue.
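As background, a common image-based TTC estimator uses the frame-to-frame expansion of the target's apparent size. The sketch below illustrates that idea only; it is not the specific estimator developed in the thesis.

    def time_to_contact(size_prev, size_curr, dt):
        """Image-based TTC estimate: under a constant closing speed,
        the apparent size s of the perching target grows such that
        tau ~= dt / (s_curr / s_prev - 1). Returns seconds."""
        expansion = size_curr / size_prev
        if expansion <= 1.0:
            return float("inf")  # not approaching the target
        return dt / (expansion - 1.0)

Because the estimate needs only the ratio of apparent sizes in two consecutive frames, no metric distance to the target is required.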
Item Open Access
Capture and reconstruction of the topology of undirected graphs from partial coordinates: a matrix completion based approach (Colorado State University. Libraries, 2017)
Ramasamy, Sridhar, author; Jayasumana, Anura, advisor; Paffenroth, Randy, committee member; Ray, Indrajit, committee member; Pasricha, Sudeep, committee member
With the advancement of science and technology, new types of complex networks have become commonplace across varied domains such as computer networks, the Internet, biotechnological studies, sociology, and condensed matter physics. The surge of research interest in graphs and topology can be attributed to important applications such as graph representation of words in computational linguistics, identification of terrorists for national security, the study of complicated atomic structures, and modeling connectivity in condensed matter physics. Well-known social networks such as Facebook and Twitter have millions of users, while the science citation index is a repository of millions of records and citations. These examples indicate the importance of efficient techniques for measuring, characterizing, and mining large and complex networks. Analyzing graph attributes to understand the topology and embedded properties of these complex graphs is often difficult due to the need to process huge data volumes, the lack of compressed representations, and the lack of complete information. Due to improper or inadequate acquisition processes, inaccessibility, and similar causes, we often end up with only partial graph data, so there is immense value in being able to extract the missing information from the available data. Obtaining the topology of a graph, such as a communication network or a social network, from incomplete information is therefore our research focus. Specifically, this research addresses the problem of capturing and reconstructing the topology of a network from a small set of path-length measurements; an accurate solution to this problem also provides a compressed representation of the graph. A technique to obtain the topology from only a partial set of information about network paths is presented. We demonstrate the capture of the network topology from a small set of measurements corresponding to (a) shortest hop distances of nodes with respect to a small set of nodes called anchors, or (b) a set of pairwise hop distances between random node pairs. These two measurement sets can be related to the distance matrix D, a common representation of the topology in which an entry contains the shortest hop distance between two nodes. In an anchor-based method, the shortest hop distances of nodes to a set of M anchors constitute what is known as a Virtual Coordinate (VC) matrix, a submatrix of the columns of D corresponding to the anchor nodes. Random pairwise measurements correspond to a random subset of the elements of D. The proposed technique depends on a low-rank matrix completion method based on extended Robust Principal Component Analysis to extract the unknown elements. Applying matrix completion relies on the conjecture that many natural data sets are inherently low dimensional, so the corresponding matrix has relatively low rank; we demonstrate that this holds for D in many large-scale networks as well, which lets us use results from the theory of matrix completion to capture the topology. Two important types of graphs have been used for evaluation, namely Wireless Sensor Network (WSN) graphs and social network graphs. For the WSN examples, we use the Topology Preserving Map (TPM), a homeomorphic representation of the original layout, to evaluate the effectiveness of the technique from partial sets of entries of the VC matrix. A double-centering-based approach is used to evaluate the TPMs from VCs, in comparison with the existing non-centered approach. Results are presented for both random anchors and anchors that are farthest apart on the boundaries. The idea of obtaining topology is extended to social network link prediction. The significance of this result lies in the fact that, with increasing privacy concerns, obtaining data in the form of a VC matrix or a hop-distance matrix becomes difficult. Predicting the unknown entries of a matrix thus provides a novel approach to social network link prediction, supported by the fact that the distance matrices of most real-world networks are naturally low ranked. The accuracy of the proposed techniques is evaluated using 4 different WSNs and 3 different social networks. Two 2D and two 3D networks, with 500 to 1600 nodes, have been used for the WSN evaluation. We are able to obtain accurate TPMs for both random and extreme anchors with only 20% to 40% of the VC matrix entries. The mean error quantifies the error introduced in TPMs by unknown entries; the results indicate that even with 80% of entries missing, the mean error is around 35% to 45%. The Facebook, Collaboration, and Enron Email subnetworks, with 744, 4158, and 3892 nodes respectively, have been used for social network capture, and the results are very promising. With 80% of the information missing from the hop-distance matrix, a maximum error of only around 6% is incurred, and the error in predicted hop distance is less than 0.5 hops. This also opens up the idea of representing a network compactly by its VC matrix.
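The completion step can be pictured with a simple rank-projection loop. The thesis uses an extended Robust PCA formulation, so the plain-NumPy sketch below, with illustrative parameter choices, is only a stand-in for the idea of filling unknown hop distances under a low-rank assumption.

    import numpy as np

    def complete_distance_matrix(D_obs, mask, rank=10, n_iter=200):
        """Fill unknown hop distances by alternating between a rank-r
        SVD projection and re-imposing the measured entries.
        D_obs: distance matrix with arbitrary values at unknown entries;
        mask: boolean matrix, True where an entry was measured."""
        X = np.where(mask, D_obs, D_obs[mask].mean())  # init unknowns
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank projection
            X[mask] = D_obs[mask]                      # keep known entries
        return X

The same loop applies whether the known entries form anchor columns (a VC matrix) or a random scatter of pairwise distances.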
Item Open Access
Comprehensive concept-phase system safety analysis for hybrid-electric vehicles utilizing automated driving functions (Colorado State University. Libraries, 2019)
Knopf, Matthew David, author; Bradley, Thomas, advisor; Olsen, Daniel, committee member; Pasricha, Sudeep, committee member
Automotive system safety (SS) analysis involving automated driving functions (ADFs) and advanced driver assistance systems (ADAS) is an active subject of research but remains highly proprietary. A comprehensive SS analysis and a risk-informed safety case (RISC) are required for all complex hybrid-vehicle builds, especially those utilizing ADFs and ADAS. Industry-standard SS procedures have been developed and are accessible, but they contain few detailed instructions or references for completing a thorough automotive SS analysis. In this work, a comprehensive SS analysis is performed on an SAE Level 2 autonomous hybrid-vehicle architecture in the concept phase, which utilizes lateral and longitudinal automated corrective control actions. This work first outlines a proposed SS process, including a cross-functional SS working-group procedure; it then develops an item definition inclusive of the ADFs and ADAS and examines 5 hazard analysis and risk assessment (HARA) techniques common to the automotive industry, applied to 11 vehicle systems; finally, it elicits the safety goals and functional requirements necessary for safe vehicle operation. The results detail functional failures, causes, effects, prevention, and mitigation methods, as well as the utility of, and instructions for completing, the various HARA techniques. The conclusion shows that the resulting critical safety concerns for an SAE Level 2 autonomous system can be reduced through the developed list of 116 safety goals and 950 functional safety requirements.
Item Open Access
Dancing the two step: a phenomenological qualitative study on stroke survivors' experiences using an augmented reality system (Colorado State University. Libraries, 2015)
Gisetti, Alexandra, author; Sample, Pat L., advisor; Malcolm, Matt, committee member; Pasricha, Sudeep, committee member
Introduction: Having a stroke can be a very debilitating experience, causing hemiparesis or hemiplegia. Often, when individuals are discharged home, therapeutic support decreases. This gives rehabilitation specialists an opportunity to create an at-home, remotely monitored therapeutic tool, and augmented reality (AR) provides a medium to meet this opportunity. The purpose of this study is to understand stroke survivors' overall experience using AR technology as a remotely monitored, home-based therapy program, so that rehabilitation professionals can gain a clearer view of its impact and impression on survivors' day-to-day lives. Methods: This study used a phenomenological qualitative approach in which two participants were trained on an AR system called Gator Games and were interviewed three times over a month to ascertain their lived experiences using such a system. The interviews were transcribed, coded, and analyzed. Results: The following themes were identified: (1) No time to be impaired, (2) Perseverance, (3) Hope: Still trying new therapies in hopes of getting better, and (4) Having a primary hobby: A way to see me improve and get better. These results were confirmed through triangulating analysts, peer debriefing, and member checking. Discussion: Due to technological difficulties with Gator Games, the AR system was only a minimal part of the participants' daily lives rather than a large one. The focus of these individuals was more on their role as a family member, persevering through their symptoms, and participating in a passionate hobby. Conclusion: There is potential for this technology to be used as a remotely monitored, at-home therapeutic tool; however, for the games to be considered more engaging, they need to be customized according to participant feedback and potentially include more mentally stimulating games rather than games that focus only on physical capabilities.

Item Open Access
Design methodology and productivity improvement in high speed VLSI circuits (Colorado State University. Libraries, 2017)
Hossain, KM Mozammel, author; Chen, Thomas W., advisor; Malaiya, Yashwant, committee member; Pasricha, Sudeep, committee member; Pezeshki, Ali, committee member
To view the abstract, please see the full text of the document.

Item Open Access
Design of a multi-sensor platform for integrating extracellular acidification rate with multi-metabolite flux measurement for small biological samples (Colorado State University. Libraries, 2019)
Obeidat, Yusra M., author; Chen, Tom, advisor; Pasricha, Sudeep, committee member; Collins, George, committee member; Tobet, Stuart, committee member
To view the abstract, please see the full text of the document.

Item Open Access
Dynamic resource management in heterogeneous systems: maximizing utility, value, and energy-efficiency (Colorado State University. Libraries, 2021)
Machovec, Dylan, author; Siegel, H. J., advisor; Maciejewski, Anthony A., committee member; Pasricha, Sudeep, committee member; Burns, Patrick, committee member
The need for high performance computing (HPC) resources is rapidly expanding throughout many technical fields, but only finite resources are available to meet this demand. It is therefore important to manage these resources effectively so that as much useful work as possible is completed. In this research, HPC systems executing parallel jobs are considered with and without energy constraints. Additionally, the case where preemption is available is considered for HPC systems executing only serial jobs. Dynamic resource management techniques are designed, evaluated, and compared in heterogeneous environments to assign jobs to HPC nodes. These techniques are evaluated based on system-wide performance measures (value or utility), which quantify the amount of useful work accomplished by the HPC system. Near real-time heuristics are designed to optimize performance in specific environments, and the best-performing techniques are combined using intelligent metaheuristics that dynamically switch between heuristics based on the characteristics of the current environment. Resource management techniques are also designed for the assignment of unmanned aerial vehicles (UAVs) to surveil targets, where performance is characterized by a value-based measure and each UAV is constrained in its total energy consumption.
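A minimal sketch of the kind of value-driven placement rule on which such heuristics build (all names here are hypothetical; the dissertation's heuristics and metaheuristics are considerably more elaborate):

    def assign_job(job, nodes, value_of, runtime_on, next_free):
        """Greedy, heterogeneity-aware placement sketch: run an arriving
        job where its estimated value per unit execution time is highest.
        value_of(job, finish_time) may decay after the job's deadline to
        capture the utility/value measures described above; runtime_on
        gives the per-node execution time estimate."""
        def value_rate(node):
            finish = next_free[node] + runtime_on(job, node)
            return value_of(job, finish) / runtime_on(job, node)
        best = max(nodes, key=value_rate)
        next_free[best] += runtime_on(job, best)
        return best

A metaheuristic layer would monitor workload characteristics and swap this scoring rule for a different one when the environment changes.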
Item Open Access
Efficient input space exploration for falsification of cyber-physical systems (Colorado State University. Libraries, 2022)
Savaliya, Meetkumar, author; Prabhu, Vinayak, advisor; Pasricha, Sudeep, committee member; Ghosh, Sudipto, committee member
In recent years, black-box optimization-based search testing for Signal Temporal Logic (STL) specifications has been shown to be a promising approach for finding bugs in complex cyber-physical systems (CPS) that are out of reach of formal analysis tools. The efficacy of this approach depends on efficiently exploring the input space, which for CPS is infinite: inputs are functions from some time domain to the domain of signal values. Typically, in black-box testing, input signals are constructed from a small set of parameters, and the optimizer searches over this set of parameters to find a falsifying input. In this work we propose a heuristic that uses the step response of the system, a standard characteristic from control engineering, to obtain a smaller time interval in which the optimizer needs to vary the inputs, enabling a smaller set of parameters for the optimizer to search over. We evaluate the heuristic on three complex Simulink model benchmarks from the CPS falsification community and demonstrate the efficacy of our approach.
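One plausible reading of the step-response heuristic, sketched under the assumption that the restricted interval is derived from the settling time of the response (the thesis may derive the interval differently):

    import numpy as np

    def settling_time(t, y, tol=0.02):
        """Time after which the step response y(t) stays inside a +/-tol
        band around its final value; a rough bound on how long an input
        change keeps influencing the output."""
        band = tol * abs(y[-1])
        outside = np.flatnonzero(np.abs(y - y[-1]) > band)
        return t[outside[-1]] if outside.size else t[0]

The falsifier can then concentrate its input control points within a window of roughly this length instead of spreading them over the whole simulation horizon, shrinking the parameter space it must search.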
Item Open Access
Extending and validating the stencil processing unit (Colorado State University. Libraries, 2016)
Rajasree, Revathy, author; Rajopadhye, Sanjay, advisor; Pasricha, Sudeep, committee member; Anderson, Charles W., committee member
Stencils are an important class of programs that appear at the core of many scientific and general-purpose applications. These compute-intensive kernels can benefit heavily from the massive compute power of accelerators like the GPGPU. However, because there is no on-chip communication between the coarse-grain processors on a GPU, any data transfer or synchronization between dependent tiles in stencil computations has to go through off-chip (global) memory, which is quite energy-expensive. On the road to exascale computing, energy is becoming an important cost metric, and hardware and software that collaboratively reduce a system's energy consumption are increasingly important. To make the execution of dense stencils more energy efficient, Rajopadhye et al. proposed a GPGPU-based accelerator called the Stencil Processing Unit (SPU), which introduces simple neighbor-to-neighbor communication between the Streaming Multiprocessors (SMs) on the GPU, thereby allowing restricted data sharing between consecutive thread blocks. The SPU includes special storage units, called Communication Buffers, to orchestrate this data transfer, and provides an explicit mechanism for inter-thread-block synchronization by way of a special instruction. It achieves energy efficiency relative to GPUs by reducing the number of off-chip accesses in stencils, which in turn reduces dynamic energy overhead. Uguen developed a cycle-accurate performance simulator for the SPU, called SPU-Sim, and evaluated it using a matrix multiplication kernel, which was not well suited to this accelerator. This work focuses on extending SPU-Sim and evaluating the SPU architecture using a more insightful benchmark. We introduce a producer-consumer based inter-block synchronization approach on the SPU, which is more efficient than the previous global synchronization, and an overlapped multi-pass execution model in the SPU runtime system. These optimizations have been implemented in SPU-Sim. Furthermore, the existing GPUWattch power model in the simulator has been refined to provide better power estimates for the SPU architecture. The improved architecture has been evaluated using a simple 2-D stencil benchmark, and we observe an average of 16% savings in dynamic energy on the SPU compared to a fairly close GPU platform. Nonetheless, the total energy consumption on the SPU is still comparatively high due to the static energy component. This high static energy is a direct consequence of the platform's increased leakage power resulting from the inclusion of special load/store units. Our conservative estimates indicate that replacing the current design of these L/S units with DMA engines can bring about a 15% decrease in the SPU's leakage power, which can help the SPU outperform the GPU in terms of energy.

Item Open Access
Hardware implementation and design space exploration for Wave 2D and Jacobi 2D stencil computations (Colorado State University. Libraries, 2017)
Chandramohan, Rajbharath, author; Rajopadhye, Sanjay, advisor; Pinaud, Oliver, committee member; Pasricha, Sudeep, committee member
Hardware accelerators are highly optimized functional blocks designed to offload specific tasks from the CPU at higher performance. We developed a hardware accelerator for the Jacobi 2D and Wave 2D algorithms, two computations with a stencil pattern that are used in many scientific applications in the fields of acoustics, electromagnetics, and fluid dynamics. These computations involve large problem sizes, memory limitations, and bandwidth constraints that result in long run times, so an approach that increases performance while reducing the bandwidth requirement is necessary. We developed analytical performance, bandwidth, and area models for the Wave 2D and Jacobi 2D algorithms and solved them for the optimal solution using posynomials and the positivity property in MATLAB and Excel Solver. We split the computation into two levels of tiling. The first level, called passes, consists of rectangular prisms that run through the 3-D iteration space; each pass is mapped to a grid of processing elements (PEs) in the hardware accelerator. The second level of tiling splits the vertical prism into smaller prisms executed by a single PE. These optimizations are implemented in Verilog using Altera Quartus and simulated using ModelSIM. The ModelSIM results provide an accurate model and experimental verification of the design. We also achieved improved performance and lower bandwidth.
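For concreteness, the untiled Jacobi 2D kernel that such an accelerator implements looks like the following NumPy reference specification; the Verilog design realizes a tiled equivalent of this loop nest.

    import numpy as np

    def jacobi2d(a, steps):
        """Untiled functional specification of Jacobi 2D: each time step
        replaces every interior point with the average of its four
        neighbors. The accelerator tiles this (t, i, j) iteration space
        into passes mapped onto the PE grid, then into per-PE sub-prisms."""
        for _ in range(steps):
            nxt = a.copy()
            nxt[1:-1, 1:-1] = 0.25 * (a[:-2, 1:-1] + a[2:, 1:-1] +
                                      a[1:-1, :-2] + a[1:-1, 2:])
            a = nxt
        return a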
Item Open Access
Heterogeneous computing environment characterization and thermal-aware scheduling strategies to optimize data center power consumption (Colorado State University. Libraries, 2012)
Al-Qawasmeh, Abdulla, author; Siegel, H. J., advisor; Maciejewski, Anthony A., advisor; Pasricha, Sudeep, committee member; Wang, Haonan, committee member
Many computing systems are heterogeneous both in terms of the performance of their machines and in terms of the characteristics and computational complexity of the tasks that execute on them. Furthermore, different tasks are better suited to execute on specific types of machines. Optimally mapping tasks to machines in a heterogeneous system is, in general, an NP-complete problem, so in most cases heuristics are used to find near-optimal mappings. The performance of allocation heuristics can be affected significantly by factors such as task and machine heterogeneities. In this thesis, different measures are identified for quantifying the heterogeneity of heterogeneous computing (HC) systems, and the correlation between the performance of the heuristics and these measures is shown. The power consumption of data centers has been increasing at a rapid rate over the past few years. Motivated by the need to reduce that consumption, many researchers have been investigating methods to increase the energy efficiency of computing at different levels: chip, server, rack, and data center. Many of today's data centers also face physical limitations on the power available to run them. The first problem studied in this thesis is maximizing the performance of a data center subject to total power consumption and thermal constraints. A power model that includes the power consumed in both Computer Room Air Conditioning (CRAC) units and compute nodes is considered, and the performance of the data center is quantified as the total reward collected from completing tasks in a workload by their individual deadlines. The second problem studied is minimizing the power consumption of a data center while guaranteeing that overall performance does not drop below a specified threshold. For both problems, novel optimization techniques are developed for assigning the performance states of compute cores at the data center level. The assignment techniques are divided into two stages: the first stage assigns the P-states of cores, the desired number of tasks per unit time allocated to each core, and the outlet CRAC temperatures; the second stage assigns individual tasks to cores as they arrive at the data center so that the actual number of tasks per unit time allocated to a core approaches the desired number set by the first stage.
Item Open Access
Impact of resequencing buffer distribution on packet reordering (Colorado State University. Libraries, 2011)
Mandyam Narasiodeyar, Raghunandan, author; Jayasumana, Anura P., advisor; Malaiya, Yashwant K., committee member; Pasricha, Sudeep, committee member
Packet reordering in the Internet has become an unavoidable phenomenon wherein packets get displaced during transmission, resulting in out-of-order arrivals at the destination. Resequencing buffers are used at the end nodes to recover from packet reordering. This thesis presents analytical estimation methods for Reorder Density (RD) and Reorder Buffer-occupancy Density (RBD), two metrics of packet reordering, for packet sequences as they traverse resequencing nodes with limited buffers. In the analysis, a Lowest First Resequencing Algorithm is defined and used at individual nodes to restore packet order. The results are obtained by studying the patterns of sequences as they traverse resequencing nodes. The estimates of RD and RBD are found to vary for sequences containing different types of reordering patterns, such as Independent Reordering, Embedded Reordering, and Overlapped Reordering; therefore, multiple estimates, in the form of theorems catering to the different reordering patterns, are presented. The proposed estimation models assist in allocating resources across intermediate network elements to mitigate the effect of packet reordering. Theorems to derive RBD from RD, when only RD is available, are also presented. As with the resequencing estimation models, the effective RBD for a given RD is found to vary across reordering patterns, so multiple theorems catering to the different patterns are presented. Such RBD estimates would be useful for allocating resources based on QoS criteria in which one of the metrics is RD. Simulations driven by Internet measurement traces and random sequences are used to verify the analytical results. Since a high degree of packet reordering is known to affect the quality of applications using TCP and UDP on the Internet, this study has broad applicability in the area of mobile communications and networks.
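For readers unfamiliar with RD, the metric can be sketched as the normalized histogram of packet displacements. The version below is simplified in that it ignores losses and duplicates.

    from collections import Counter

    def reorder_density(arrivals):
        """Reorder Density sketch: normalized histogram of displacements,
        where a packet's displacement is its arrival position minus its
        sequence number. 'arrivals' lists sequence numbers 0..n-1 in
        order of arrival."""
        rd = Counter(pos - seq for pos, seq in enumerate(arrivals))
        n = len(arrivals)
        return {d: c / n for d, c in sorted(rd.items())}

    # e.g. reorder_density([0, 2, 1, 3]) -> {-1: 0.25, 0: 0.5, 1: 0.25}

An in-order sequence yields the degenerate density {0: 1.0}, and heavier tails indicate more severe reordering.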
Item Open Access
Implementation and evaluation of backward facing fuel consumption simulation and testing methods (Colorado State University. Libraries, 2019)
Johnson, Troy, author; Bradley, Thomas, advisor; Pasricha, Sudeep, committee member; Weinberger, Chris, committee member
The Colorado State University Vehicle Innovations Team (VIT) participates in numerous Advanced Vehicle Technology Competitions (AVTCs) as well as several hybrid-electric vehicle projects with outside sponsors. This study seeks to develop, and quantify the accuracy of, the simulation and testing methods to be used in the VIT's predictive optimal energy management strategy research for these projects. First, a backward-facing vehicle simulation model is built and populated with real-world OBD-II drive data collected from a 2019 Toyota Tacoma. This includes the creation of an engine speed vs. accelerator position vs. engine load map as well as an engine speed vs. engine load vs. fuel rate map. Acceleration events (AEs) are performed with a baseline shift schedule, and vehicle performance is recorded. The backward-facing simulation model is then used to predict how a modified shift schedule will affect the vehicle's fuel consumption. Further AEs are performed with the modified shift schedule, and the performance data is compared to the simulation. The backward-facing simulation model was capable of predicting average engine speed within 0.3 RPM, average engine load within 5.2%, and average total fuel consumption within 0.2 grams of the actual testing data. This study concludes that the vehicle simulation methods are capable of predicting fuel consumption changes within 1.4% of what is actually measured during real-world testing, with 95% confidence.
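The backward-facing accounting amounts to interpolating the fitted maps along a measured drive trace. Below is a small sketch in which a toy fuel-rate surface stands in for the maps built from the Tacoma's OBD-II logs; all axis values are invented for illustration.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical gridded map: engine speed (RPM) x engine load (%)
    # -> fuel rate (g/s); a toy surface, not the fitted Tacoma map.
    speed = np.array([800.0, 1600, 2400, 3200, 4000])
    load = np.array([10.0, 30, 50, 70, 90])
    fuel = np.outer(speed / 1000.0, load / 50.0) * 0.4

    fuel_rate = RegularGridInterpolator((speed, load), fuel)

    def fuel_used(speeds, loads, dt):
        """Backward-facing accounting: integrate the interpolated fuel
        rate over a drive trace sampled every dt seconds; returns grams."""
        pts = np.column_stack([speeds, loads])
        return float(np.sum(fuel_rate(pts)) * dt)

A modified shift schedule changes the engine speed/load trace the maps are evaluated along, which is how the model predicts its effect on total fuel consumption.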
Item Open Access
In-vehicle validation of energy consumption modeling and simulation (Colorado State University. Libraries, 2020)
DiDomenico, Gabriel, author; Bradley, Thomas, advisor; Quinn, Jason, committee member; Pasricha, Sudeep, committee member
The Colorado State University (CSU) Vehicle Innovation Team (VIT) participated in the first Department of Energy (DOE) Advanced Vehicle Technology Competition (AVTC) in 1988, and has since participated in subsequent iterations of the competition as well as other advanced vehicle technology projects. This study aims to validate the team's mathematical modeling and simulation of electrical energy consumption from the EcoCAR 3 competition (academic years 2014-2018), as well as the testing methods used for validation. First, baseline simulation results are obtained by simulating a 0-60 mph wide open throttle (WOT, or 100% APP) acceleration event (AE), the product being electrical energy economy in Wh/mi. The baseline model (representing the baseline control strategy and vehicle parameters) is also simulated for 0-40 mph and 0-20 mph AEs. These tests are replicated in the actual vehicle, a 2016 P2 PHEV Chevrolet Camaro entirely designed and built by CSU's VIT. Next, the same AEs are tested with a changed acceleration rate due to the APP being limited to 45%. The velocity profiles from these tests are used as feedback for the model, and the tests are replicated in simulation. Finally, the baseline model is altered in 3 additional ways to understand their effects on electrical energy consumption: the mass is increased, the auxiliary low-voltage (LV) load is increased, and the transmission is restricted to a single gear. These simulations are again replicated in-vehicle to validate the model's ability to predict changes in electrical energy consumption as vehicle parameters change. This study concludes that the model is able to predict these changes within 6.5%, or ±30.2 Wh/mi, with 95% confidence.

Item Open Access
Localized anomaly detection via hierarchical integrated activity discovery (Colorado State University. Libraries, 2014)
Chockalingam, Thiyagarajan, author; Rajopadhye, Sanjay, advisor; Anderson, Chuck, advisor; Pasricha, Sudeep, committee member; Bohm, Wim, committee member
With the increasing number and variety of camera installations, unsupervised methods that learn typical activities have become popular for anomaly detection. In this thesis, we consider recent methods based on temporal probabilistic models and improve them in multiple ways. Our contributions are the following: (i) we integrate low-level processing and temporal activity modeling, showing how this feedback improves the overall quality of the captured information; (ii) we show how the same approach can be taken to perform hierarchical multi-camera processing; (iii) we use spatial analysis of the anomalies both to perform local anomaly detection and to automatically frame the detected anomalies. We illustrate the approach on both traffic data and videos from a metro station. We also investigate the application of topic models in brain-computer interfaces for mental task classification, observing a classification accuracy of up to 68% for four mental tasks on individual subjects.

Item Open Access
Low-cost embedded systems for community-driven ambient air quality monitoring (Colorado State University. Libraries, 2022)
Wendt, Eric, author; Volckens, John, advisor; Pierce, Jeffrey, committee member; Jathar, Shantanu, committee member; Pasricha, Sudeep, committee member
Fine particulate matter (PM2.5) air pollution is a leading cause of death, disease, and environmental degradation worldwide. Existing PM2.5 measurement infrastructure provides broad sampling coverage, but due to high instrument costs (>10,000 USD), these monitors are rarely distributed densely at community-level scales. Low-cost sensors can be deployed more practically, in spatial and temporal configurations that fill the gaps left by more expensive monitors. Crowdsourcing low-cost sensors is a promising deployment strategy in which the sensors are operated by interested community members. Prior work has demonstrated the potential of crowdsourced networks, but low-cost sensor technology remains ripe for improvement. Here we describe a body of work aimed at bolstering the future of community-driven air quality monitoring through technological innovation. We first detail the development of the Aerosol Mass and Optical Depth (AMODv2) sampler, a low-cost monitor capable of unsupervised measurement of PM2.5 mass concentration and Aerosol Optical Depth (AOD), a measure of light extinction in the full atmospheric column due to airborne particles. We highlight key design features of the AMODv2 and demonstrate that its measurements are accurate relative to standard reference monitors. Second, we describe a national crowdsourced network of AMODv2s, in which we leveraged the instrument's measurement capabilities in a network of university students to analyze the relationship between PM2.5 and AOD in the presence of wildfire smoke in the United States. Finally, we propose a cloud-screening algorithm for AOD measurements using all-sky images and deep transfer learning; the algorithm correctly screens over 95% of images from a custom all-sky image data set for cloud contamination. Taken as a whole, our work supports community-driven air pollution monitoring by advancing the tools and strategies communities need to better understand the air they breathe.
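A deep-transfer-learning cloud screener of this kind is typically built by training a small head on a frozen pretrained backbone. The sketch below is an assumed setup; the backbone choice, head size, and hyperparameters are illustrative, not the paper's configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Frozen pretrained CNN with a 2-class head: 'clear' vs
    # 'cloud-contaminated' all-sky images.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # only this trains

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    # Training loop over labeled all-sky images omitted for brevity.

Screening then reduces to running each co-located all-sky image through the classifier and discarding AOD measurements whose images are flagged as cloud-contaminated.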