Browsing by Author "Hayne, Stephen, committee member"
Item Open Access
Access control for IoT environments: specification and analysis (Colorado State University. Libraries, 2021)
Peterson, Jordan T., author; Ray, Indrakshi, advisor; Prabhu, Vinayak, advisor; Gersch, Joseph, committee member; Hayne, Stephen, committee member
Smart homes contain devices that are prone to attack, as seen in the 2016 Mirai botnet attacks. Authentication and access control form the first line of defense. Towards this end, we propose an attribute-based access control framework for smart homes that is inspired by the Next Generation Access Control (NGAC) model. Policies in a smart home can be complex, so we demonstrate how the formal modeling language Alloy can be used for policy analysis. In this work we formally define an IoT environment, express an example security policy in the context of a smart home, and show the policy analysis using Alloy. The work introduces processes for identifying conflicting and redundant rules with respect to a given policy and demonstrates a practical use case for those processes. In other words, it formalizes policy rule definition, home IoT environment definition, and rule analysis, all in the context of NGAC and Alloy.
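To give a feel for the rule analysis, the sketch below re-creates the two checks the abstract names, conflict and redundancy detection, in plain Python rather than Alloy. The rule structure and the sample smart-home policy are invented for illustration; the dissertation performs this analysis formally.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        user_attrs: frozenset      # attributes the requester must hold
        device_attrs: frozenset    # attributes of the target device
        action: str
        effect: str                # "permit" or "deny"

    def conflict(a: Rule, b: Rule) -> bool:
        # Same action, overlapping attribute requirements, opposite effects.
        return (a.action == b.action and a.effect != b.effect
                and bool(a.user_attrs & b.user_attrs)
                and bool(a.device_attrs & b.device_attrs))

    def redundant(general: Rule, specific: Rule) -> bool:
        # 'specific' only adds requirements to 'general' with the same outcome,
        # so 'general' already covers every request 'specific' matches.
        return (general.action == specific.action
                and general.effect == specific.effect
                and general.user_attrs <= specific.user_attrs
                and general.device_attrs <= specific.device_attrs)

    policy = [
        Rule(frozenset({"adult"}), frozenset({"thermostat"}), "set", "permit"),
        Rule(frozenset({"adult"}), frozenset({"thermostat"}), "set", "deny"),
        Rule(frozenset({"adult"}), frozenset({"thermostat", "livingRoom"}), "set", "permit"),
    ]
    for i, a in enumerate(policy):
        for b in policy[i + 1:]:
            if conflict(a, b):
                print("conflict:", a.effect, "vs", b.effect)
            if redundant(a, b) or redundant(b, a):
                print("redundant pair found")
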
Item Open Access
Access control models for pervasive computing environments (Colorado State University. Libraries, 2010)
Toahchoodee, Manachai, author; Ray, Indrakshi, advisor; McConnell, Ross M., committee member; Ray, Indrajit, 1966-, committee member; Hayne, Stephen, committee member
With the growing advancement of pervasive computing technologies, we are moving towards an era where context information will be necessary for access control. Traditional access control models like Mandatory Access Control (MAC), Discretionary Access Control (DAC), and Role-Based Access Control (RBAC) do not work well in this scenario for several reasons. First, unlike traditional applications, pervasive computing applications usually do not have a well-defined security perimeter: the entities an application will interact with, or the resources that will be accessed, may not be known in advance. Second, these applications are dynamic in nature: the accessing entities may change, resources requiring protection may be created or modified, and an entity's access to resources may change during the course of the application, which makes protecting resources during application execution extremely challenging. Third, pervasive computing applications use the knowledge of surrounding physical spaces to provide services; security policies designed for such applications must therefore use contextual information. Thus, new access control models and technologies are needed for pervasive computing applications. In this dissertation, we propose two types of access control models for pervasive computing environments: one determines accessibility based on spatio-temporal constraints, and the other determines accessibility based on the trustworthiness of the entities. The different features of access control models may interact in subtle ways, resulting in conflicts, so it is important to analyze and understand these models before they are widely deployed. The other contribution of this dissertation is therefore to verify the correctness of the models; the results of analyzing them will enable their users to make informed decisions. Toward this end, we propose automated verification techniques for our access control models.

Item Open Access
Analysis of structured data and big data with application to neuroscience (Colorado State University. Libraries, 2015)
Sienkiewicz, Ela, author; Wang, Haonan, advisor; Meyer, Mary, committee member; Breidt, F. Jay, committee member; Hayne, Stephen, committee member
Neuroscience research leads to a remarkable set of statistical challenges, many of them due to the complexity of the brain: its intricate structure and dynamical, non-linear, often non-stationary behavior. The challenge of modeling brain functions is magnified by the quantity and inhomogeneity of data produced by scientific studies. Here we show how to take advantage of advances in distributed and parallel computing to mitigate memory and processor constraints and develop models of neural components and neural dynamics. First we consider the problem of function estimation and selection in time-series functional dynamical models. Our motivating application is the point-process spiking activity recorded from the brain, which poses major computational challenges for modeling even moderately complex brain functionality. We present a big data approach to the identification of sparse nonlinear dynamical systems using generalized Volterra kernels and their approximation with B-spline basis functions. The performance of the proposed method is demonstrated in experimental studies. We also consider a set of unlabeled tree objects with topological and geometric properties. For each data object, two curve representations are developed to characterize its topological and geometric aspects. We further define the notions of topological and geometric medians, as well as quantiles, based on both representations. In addition, we take a novel approach to defining Pareto medians and quantiles through a multi-objective optimization problem; in particular, we study two objective functions, which measure topological variation and geometric variation respectively. Analytical solutions are provided for topological and geometric medians and quantiles; for Pareto medians and quantiles in general, a genetic algorithm is implemented. The proposed methods are applied to analyze a data set of pyramidal neurons.

Item Open Access
Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database (Colorado State University. Libraries, 2008)
Whyman, Paul Arthur, author; Ray, Indrajit, advisor; Krawetz, Neal, committee member; Whitley, L. Darrell, committee member; Hayne, Stephen, committee member
A means to reduce security risks to a network of computers is to manage which computers can participate on the network, and to control the participation of systems that do not conform to the security policy. Requiring systems to demonstrate their compliance with the policy can limit the risk of allowing noncompliant systems access to trusted networks. One aspect of determining the risk a system represents is patch level: a comparison between the availability of vendor security patches and their application on the system. A fully updated system has all available patches applied. Using patch level as a security policy metric, systems can evaluate as compliant yet still contain known vulnerabilities, representing real risks of exploitation. An alternative approach is a direct comparison of system software to the public vulnerability reports contained in the National Vulnerability Database (NVD). This approach may produce a more accurate assessment of system risk for several reasons, including removing the delay caused by vendor patch development and analyzing system risk using vendor-independent vulnerability information. This work demonstrates empirically that current, fully patched systems contain numerous software vulnerabilities, and the technique can apply to platforms other than those of open-source origin. This alternative method, which compares system software components to lists of known software vulnerabilities, must reliably match system components to those listed as vulnerable; the match requires a precise identification of both the vulnerability and the software that the vulnerability affects. In the process of this analysis, significant issues arose within the NVD pertaining to the presentation of open-source vulnerability information: direct matching is not possible using the current information in the NVD. Furthermore, these issues support the belief that the NVD is not an accurate data source for popular statistical comparisons between closed- and open-source software.
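The core matching step reduces to comparing installed package versions against known-vulnerable ranges. A toy Python version is below; the package list and advisory identifiers are made up, and (as the abstract notes) real NVD entries do not yet support such direct matching reliably.

    def parse(version: str) -> tuple:
        return tuple(int(part) for part in version.split("."))

    # Hypothetical advisories: a package is vulnerable below fixed_in.
    advisories = [
        ("openssl", "1.0.2", "CVE-XXXX-0001"),
        ("bash",    "4.3.30", "CVE-XXXX-0002"),
    ]
    installed = {"openssl": "1.0.1", "bash": "4.4.0"}

    for pkg, fixed_in, advisory in advisories:
        if pkg in installed and parse(installed[pkg]) < parse(fixed_in):
            print(f"{pkg} {installed[pkg]} matches {advisory} (fixed in {fixed_in})")
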
Item Open Access
Behavioral complexity analysis of networked systems to identify malware attacks (Colorado State University. Libraries, 2020)
Haefner, Kyle, author; Ray, Indrakshi, advisor; Ben-Hur, Asa, committee member; Gersch, Joe, committee member; Hayne, Stephen, committee member; Ray, Indrajit, committee member
Internet of Things (IoT) environments are often composed of a diverse set of devices that span a broad range of functionality, making them a challenge to secure. This diversity of function leads to a commensurate diversity in network traffic: some devices have simple network footprints and some have complex ones. The complexity in a device's traffic provides a differentiator the network can use to distinguish which devices are most effectively managed autonomously and which are not. This study proposes an informed autonomous learning method that quantifies the complexity of a device based on historic traffic and applies this complexity metric to build a probabilistic model of the device's normal behavior using a Gaussian Mixture Model (GMM). The method yields an anomaly-detection classifier with inlier probability thresholds customized to the complexity of each device, without requiring labeled data. Model efficacy is then evaluated using seven common types of real malware traffic across four datasets of device network traffic: one residential, two from labs, and one consisting of commercial automation devices. The results of analyzing over 100 devices and 800 experiments show that the model leads to highly accurate representations of the devices and a strong correlation between the measured complexity of a device and the accuracy with which its network behavior can be modeled.
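The modeling step has a natural scikit-learn analogue, sketched below: fit a GaussianMixture to a device's historic traffic features and flag low-likelihood observations. The traffic features and the complexity-based threshold rule here are stand-ins, not the dissertation's actual definitions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Stand-in per-flow features for one device: (bytes/s, fraction outbound).
    train = rng.normal(loc=[500.0, 0.2], scale=[40.0, 0.05], size=(1000, 2))

    gmm = GaussianMixture(n_components=2, random_state=0).fit(train)
    scores = gmm.score_samples(train)            # log-likelihood per observation

    # Stand-in complexity metric: spread of the training scores. A more
    # complex device gets a looser (lower) inlier threshold.
    threshold = np.percentile(scores, 1) - scores.std()

    new = np.array([[510.0, 0.21], [2500.0, 0.9]])   # second row: unusual burst
    print(gmm.score_samples(new) < threshold)        # [False  True]
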
Item Open Access
Characterizing the visible address space to enable efficient continuous IP geolocation (Colorado State University. Libraries, 2020)
Gharaibeh, Manaf, author; Papadopoulos, Christos, advisor; Partridge, Craig, advisor; Heidemann, John, committee member; Ray, Indrakshi, committee member; Hayne, Stephen, committee member
Internet Protocol (IP) geolocation is vital for location-dependent applications and many network research problems. The benefits to applications include enabling content customization, proximal server selection, and management of digital rights based on the location of users, to name a few. The benefits to networking research include providing geographic context useful for several purposes, such as studying the geographic deployment of Internet resources, binding cloud data to a location, and studying censorship and monitoring, among others. Measurement-based IP geolocation is widely considered the state-of-the-art client-independent approach to estimating the location of an IP address. However, full measurement-based geolocation is prohibitive when applied continuously to the entire Internet to maintain up-to-date IP-to-location mappings. Furthermore, many IP address blocks rarely move, making such full geolocation unnecessary. The thesis of this dissertation states that we can enable efficient, continuous IP geolocation by identifying clusters of co-located IP addresses and their location stability from latency observations. In this statement, a cluster indicates a group of an arbitrary number of adjacent co-located IP addresses (from a few up to a /16), and location stability is a measure of how often an IP block changes location. We gain efficiency by allowing IP geolocation systems to geolocate IP addresses as units, and by detecting when a geolocation update is required, optimizations not explored in prior work. We present several studies to support this thesis statement. We first evaluate the reliability of router geolocation in popular geolocation services, complementing prior work that evaluates end-host geolocation in such services; the results show the limitations of these services and the need for better solutions, motivating our work on more accurate approaches. Second, we present a method to identify clusters of co-located IP addresses by the similarity in their latency, which allows us to geolocate them efficiently as units without compromising accuracy. Third, we present an efficient delay-based method to identify IP blocks that move over time, allowing us to recognize when geolocation updates are needed and to avoid frequent geolocation of the entire Internet. In our final study, we present a method to identify cellular blocks by their distinctive variation in latency compared to WiFi and wired blocks, which allows us to better interpret their latency estimates and to study their geographic properties without proprietary data from operators or users.
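The clustering idea in the second study can be illustrated compactly: adjacent address blocks whose latency vectors (RTTs from several vantage points) look alike are grouped and geolocated as a unit. The blocks, RTTs, and the 5 ms tolerance below are all invented, and the greedy merge is a simplification of the actual method.

    import numpy as np

    blocks = ["192.0.2.0/24", "192.0.3.0/24", "192.0.4.0/24", "192.0.5.0/24"]
    rtts = np.array([            # RTT in ms from three vantage points
        [12.1, 48.3, 95.0],
        [12.4, 47.9, 94.6],      # similar to the previous block
        [80.2, 15.5, 140.7],     # clearly elsewhere
        [79.8, 15.9, 141.1],
    ])

    clusters, current = [], [blocks[0]]
    for i in range(1, len(blocks)):
        if np.abs(rtts[i] - rtts[i - 1]).max() < 5.0:   # assumed ms tolerance
            current.append(blocks[i])
        else:
            clusters.append(current)
            current = [blocks[i]]
    clusters.append(current)
    print(clusters)   # two clusters of two adjacent /24s each
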
Item Open Access
Distributed algorithms for the orchestration of stochastic discrete event simulations (Colorado State University. Libraries, 2014)
Sui, Zhiquan, author; Pallickara, Shrideep, advisor; Anderson, Charles, committee member; Böhm, Wim, committee member; Hayne, Stephen, committee member
Discrete event simulations are widely used in modeling real-world phenomena such as epidemiology, congestion analysis, weather forecasting, economic activity, and chemical reactions. The expressiveness of such simulations depends on the number and types of entities that are modeled, and also on the interactions the entities have with each other. In the case of stochastic simulations, these interactions are based on the concomitant probability density functions. The more exhaustively a phenomenon is modeled, the greater its computational complexity and, correspondingly, its execution time. Distributed orchestration can speed up such complex simulations. This dissertation considers the problem of distributed orchestration of stochastic discrete event simulations where the computations are irregular and the processing loads stochastic. We have designed a suite of algorithms that target alleviating imbalances between processing elements across synchronization time steps. The algorithms explore different aspects of the orchestration spectrum: static vs. dynamic, reactive vs. proactive, and deterministic vs. learning-based. The feature vector that guides our algorithms includes externally observable features of the simulation, such as computational footprints and hardware profiles, and features internal to the simulation, such as entity states. The learning structure includes a basic Artificial Neural Network (ANN) and an improved version of it. The algorithms are self-tuning and account for the state of the simulation and processing elements while coping with prediction errors. Finally, these algorithms address resource uncertainty, which occurs in such settings due to resource failures, slowdowns, and heterogeneity; task apportioning, speculative tasks to cope with stragglers, and checkpointing account for the quality and state of both the resources and the simulation. The algorithms achieve demonstrably good performance: despite the irregular nature of these computations, stochasticity in the processing loads, and resource uncertainty, execution times are reduced by a factor of 1.8 when the number of resources is doubled.

Item Open Access
Embedding based clustering of time series data using dynamic time warping (Colorado State University. Libraries, 2022)
Mendis, R. A. C. Laksheen, author; Pallickara, Sangmi Lee, advisor; Pallickara, Shrideep, committee member; Hayne, Stephen, committee member
Voluminous time-series observational data impose challenges pertaining to storage and analytics, and identifying patterns in such climate time-series data is critical for many geospatial applications. Over recent years, clustering has become a key computational technique for identifying such patterns; however, data with complex structures and high dimensionality can lead to uninformative clusters and hinder clustering quality. In this research, we use state-of-the-art autoencoders with LSTMs, Bidirectional LSTMs, and GRUs to learn highly non-linear mapping functions, training the networks on subsequences of the time series to perform data reconstruction. Next, we extract the trained encoders to generate lightweight embeddings. These embeddings are more space-efficient than the original time-series data and require less computational power and fewer resources for further processing. In the final clustering step, instead of using common distance metrics like Euclidean distance, we use DTW, an algorithm that computes similarity between time series while ignoring variations in speed, to calculate similarity between the embeddings during the application of the k-Means algorithm. Based on silhouette score, this method generates better clusters than other reduction techniques.
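The clustering step is easy to make concrete. The sketch below implements the textbook dynamic-programming DTW distance and uses it, instead of Euclidean distance, to assign stand-in embedding vectors to cluster centers; the autoencoder that would produce real embeddings is omitted, and the centers are invented.

    import numpy as np

    def dtw(a, b):
        """Textbook O(nm) dynamic-programming DTW distance for 1-D series."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    t = np.linspace(0, 3, 20)
    centers = [np.sin(t), np.cos(t)]                   # stand-in cluster centers
    embeddings = [np.sin(t + 0.1), np.cos(t) + 0.05]   # stand-in encoder outputs

    for e in embeddings:                               # nearest center under DTW
        print(np.argmin([dtw(e, c) for c in centers])) # prints 0, then 1
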
Item Open Access
Individual differences in working memory affect situation awareness (Colorado State University. Libraries, 2011)
Gutzwiller, Robert S., author; Clegg, Benjamin A., advisor; DeLosh, Edward, committee member; Hayne, Stephen, committee member
Situation awareness (SA) is a construct that brings together theories of attention, memory, and expertise in an empirical effort to showcase what awareness is and how it is acquired by operators. Endsley (1995a) defined SA in a way that includes many theoretical associations between awareness and specific memory and attention mechanisms. Work characterizing these relationships has been sparse, however, particularly with regard to the influence of working memory (WM) on SA in novices. An experiment was devised that principally investigated novice SA as a theorized function of WM across two distinct tasks: one in which operator attention and perception (Level 1 SA) was assessed, and one in which an operator's ability to respond to events in the future (Level 3 SA) was implicitly assessed. Factor analysis was used, and the resulting outcomes from three WM tasks loaded well onto one overall WM factor. Findings from 99 participants indicate that WM has a correlative and predictive relationship with Level 3, but not Level 1, SA. The results contribute to ongoing theory and experimental work in applied psychology on SA and individual differences, showing that WM influences awareness in novice performance even when SA measures are not memory-reliant.

Item Open Access
Monitoring and characterizing application service availability (Colorado State University. Libraries, 2018)
Rammer, Daniel P., author; Papadopoulos, Christos, advisor; Ray, Indrajit, committee member; Hayne, Stephen, committee member
Reliable detection of global application service availability remains an open problem on the Internet. Some availability issues are diagnosable by an administrator monitoring the service locally, but far more may be identified by monitoring user requests (e.g., DNS or SSL misconfiguration). In this work we present Proddle, a distributed application-layer measurement framework. The application periodically submits HTTP(S) requests from geographically diverse vantage points to gather service availability information at the application layer. Using these measurements, we reliably catalog application service unavailability events and identify their causes. Finally, analysis is performed to identify telling event characteristics, including event frequency, duration, and visibility.
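A single-vantage version of the measurement idea can be sketched with the requests library: issue an HTTP(S) request and classify the failure mode. Proddle itself coordinates many vantage points; this shows only the per-probe classification, with simplified categories.

    import requests

    def probe(url: str) -> str:
        try:
            r = requests.get(url, timeout=10)
            return "up" if r.ok else f"http-error {r.status_code}"
        except requests.exceptions.SSLError:        # must precede ConnectionError
            return "tls-failure"
        except requests.exceptions.ConnectionError:
            return "unreachable"                    # DNS or connect failure
        except requests.exceptions.Timeout:
            return "timeout"

    print(probe("https://example.com"))
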
Item Open Access
Motion segmentation for feature association (Colorado State University. Libraries, 2010)
Pace, Weston Clement, author; Draper, Bruce, advisor; Beveridge, Ross, committee member; Hayne, Stephen, committee member
In a feature-based system, physical objects are represented as spatial groups of features, so a system that hopes to operate on objects must make associations between features that belong to the same physical object. This paper segments interest points in individual frames of an image sequence using motion models based on image transformations. Experiments evaluate the associations made by these segments against ground-truth data. We give an improved version of the existing algorithm that can lead to easier threshold selection in some systems, although the ideal threshold is shown to depend on the goal of the segmentation. Lastly, we show that the underlying motion of the object is not the only factor in determining the performance of the segmentation.

Item Open Access
Multimedia transmission rules and encrypted audio and video traffic identification algorithm implemented in P4 (Colorado State University. Libraries, 2021)
Lu, Jiping, author; Partridge, Craig, advisor; Gersch, Joseph, committee member; Hayne, Stephen, committee member
With Internet traffic growing exponentially, it is critical for operators to identify voice call and video conference traffic and ensure their quality, yet the widespread deployment of encryption protocols makes it challenging to classify encrypted traffic. This research uncovers transmission rules that characterize audio and video traffic. Based on these rules, it proposes a general audio and video traffic identification algorithm, which we designed and implemented in P4 (Programming Protocol-Independent Packet Processors) to evaluate its performance. The algorithm achieves 98.98% accuracy on voice call identification; for video conferences, it achieves 96.24% accuracy on audio data identification and 88.75% accuracy on video data identification. Compared to the pervasive machine-learning-based traffic classification approaches, this algorithm bypasses complicated machine learning pipelines by applying the audio and video transmission rules directly in network functions, consuming less computation and memory.
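The paper's exact transmission rules are not given in the abstract, so the sketch below shows only the general shape of such a rule-based classifier: voice streams tend toward small, evenly spaced packets, while video frames are larger and burstier. All thresholds and flows are invented, and the real implementation runs in P4 on the data plane, not in Python.

    import statistics

    def classify(sizes, gaps_ms):
        regular = statistics.pstdev(gaps_ms) < 5.0   # near-constant packet spacing
        small = statistics.mean(sizes) < 300         # mean payload in bytes
        if small and regular:
            return "audio"
        if not small:
            return "video"
        return "other"

    voice = ([172] * 50, [20.0] * 49)                   # 20 ms RTP-like cadence
    video = ([1200, 900, 1400] * 16, [2.0, 31.0] * 24)  # larger, burstier frames
    print(classify(*voice), classify(*video))           # audio video
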
Item Open Access
On the design of a moving target defense framework for the resiliency of critical services in large distributed networks (Colorado State University. Libraries, 2018)
Amarnath, Athith, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; Hayne, Stephen, committee member
Security is a serious concern in today's digital world, and protecting and controlling access to secured data and services has placed greater emphasis on access control enforcement and management. While access control enforcement with strong policies ensures data confidentiality, availability, and integrity, protecting the access control service itself is equally important. When these services are hosted on a single server for a lengthy period of time, attackers have potentially unlimited time to periodically explore and enumerate the vulnerabilities of the server's configuration and launch targeted attacks on the service. The constant proliferation of cloud usage and distributed systems over the last decade has made it practical to distribute data or host services across a group of servers in different geographical locations. Existing election algorithms used to provide service continuity in such distributed setups work well in a benign environment, but they are not secure against skillful attackers who intend to manipulate or bring down the data or service. In this thesis, we design and implement protection for critical services, such as access-control reference monitors, using the concept of moving target defense. This concept increases the difficulty an attacker faces in compromising the point of service by periodically moving the critical service among a group of heterogeneous servers, thereby changing the attack surface and increasing the uncertainty and randomness of the point of service chosen. We describe an efficient Byzantine fault-tolerant leader election protocol for small networks that achieves the security and performance goals described in the problem statement. We then extend this solution to large enterprise networks by introducing a random walk protocol that randomly chooses the subset of servers taking part in the election.
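As a rough illustration of the random-walk idea (not the thesis's actual protocol), the sketch below wanders a hypothetical server graph for a few hops and treats the visited set as the servers taking part in the election; server names, topology, and walk length are all invented.

    import random

    graph = {   # hypothetical adjacency list of an enterprise network
        "s1": ["s2", "s3"], "s2": ["s1", "s4"], "s3": ["s1", "s4", "s5"],
        "s4": ["s2", "s3", "s5"], "s5": ["s3", "s4"],
    }

    def election_subset(start, hops, rng):
        node, visited = start, {start}
        for _ in range(hops):
            node = rng.choice(graph[node])   # move to a random neighbor
            visited.add(node)
        return visited

    print(election_subset("s1", hops=6, rng=random.Random(7)))
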
Item Open Access
One health in the U.S. military: a review of existing systems and recommendations for the future (Colorado State University. Libraries, 2014)
Evans, Rebecca I., author; Salman, Mo, advisor; Lappin, Michael, committee member; Hayne, Stephen, committee member; Peel, Jennifer, committee member
Background: The merging of the former U.S. Army Veterinary Command (VETCOM) with the former U.S. Army Center for Health Promotion and Preventive Medicine (USACHPPM) into the U.S. Army Public Health Command (USAPHC) in 2011 created an opportunity for the military to fully embrace the One Health concept. That same year, the USAPHC began work on a Zoonotic Disease Report (ZDR) aimed at supporting critical zoonotic disease risk assessments by combining zoonotic disease data from human, entomological, laboratory, and animal data sources. The purpose of this dissertation is to facilitate the creation of a military zoonotic disease surveillance program that combines disease data from both military human and animal sources.
Methods: Five of the most commonly used human military medical data systems were systematically reviewed using a standardized template based on Centers for Disease Control and Prevention (CDC) guidelines. The systems were then compared to each other in order to recommend the one(s) best suited for use in the USAPHC ZDR. The first stage of the comparison focused on each system's ability to meet the specific goals and objectives of the ZDR, whereas the second stage applied capture-recapture methodology to data system queries in order to evaluate each system's data quality (completeness). A pilot study was conducted using Lyme borreliosis to investigate the utility of military pet dogs as sentinel surveillance for zoonotic disease in military populations. Canine data came from 3996 surveys collected at 15 military veterinary facilities from 1 November 2012 through 31 October 2013; the surveys simultaneously collected Borrelia burgdorferi (Bb) seroprevalence and canine risk factor data for each participating pet dog. Human data were obtained by querying the Defense Medical Surveillance System for the same 15 military locations and the same time period. The correlation between military pet dog Bb seroprevalence and military human Lyme disease (borreliosis) data was estimated using the Spearman rank correlation. The difference between military and civilian pet dog data was examined using the chi-squared test for proportions. Multivariable logistic regression was then used to investigate whether identified risk factors could affect the observed association.
Results: The comparison of human military medical data systems found that the Military Health System Management Analysis and Reporting Tool (M2) most completely met the specific goals and objectives of the ZDR. In addition, the completeness calculation showed M2 to be the most complete source of human data, with 55% of total captured cases coming from the M2 system alone. The pilot study found a strong positive correlation between military human borreliosis data and military pet dog Bb seroprevalence data by location (rs = 0.821). The study showed reassuring similarities in pet dog seroprevalence by location for the majority of sites, but also meaningful differences between two locations, potentially indicating that military pet dogs are more appropriate indicators of Lyme disease risk for military populations than civilian pet dog data. Whether canine Bb seroprevalence is influenced by the distribution of identified risk factors could not be determined due to limited study power.
Conclusions: Based on this study, M2 was recommended as the primary source of military human medical data for the Public Health Command Zoonotic Disease Report. In addition, it was recommended that Service member pet dog data be incorporated as a sensitive and convenient measure of zoonotic disease risk in human military populations. The validity of the data, however, should be evaluated further with larger sample sizes and/or a zoonotic disease of higher prevalence.
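The study's headline correlation step is straightforward to reproduce in outline with scipy: rank per-site canine Bb seroprevalence against per-site human Lyme incidence. The per-site figures below are invented placeholders, not the study's data (which yielded rs = 0.821).

    from scipy.stats import spearmanr

    # Invented per-site values: canine Bb seroprevalence vs. human Lyme incidence.
    canine_seroprev = [0.01, 0.02, 0.08, 0.11, 0.15]   # fraction seropositive
    human_incidence = [0.3, 0.5, 2.1, 3.0, 4.2]        # cases per 10,000

    rho, p = spearmanr(canine_seroprev, human_incidence)
    print(f"rs = {rho:.3f} (p = {p:.3f})")   # perfectly monotone toy data: rs = 1.0
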
Item Open Access
Robust resource allocation heuristics for military village search missions (Colorado State University. Libraries, 2012)
Maxwell, Paul, author; Siegel, Howard Jay, advisor; Maciejewski, Anthony A., advisor; Potter, Jerry, committee member; Smith, James, committee member; Hayne, Stephen, committee member
On the modern battlefield, cordon and search missions (a.k.a. village searches) are conducted daily. Creating resource allocations that assign different types of search teams (e.g., soldiers, robots, unmanned aerial vehicles, military working dogs) to target buildings of various sizes is difficult and time consuming in a static planning environment; efficiently and effectively creating such allocations when needed during mission execution (a dynamic environment) is even more challenging. There are currently no automated means of producing these static and dynamic resource allocations for military use: planners create village search plans using reference tables in Field Manuals and personal experience. These manual methods are time consuming, and the quality of the plans produced is unpredictable and not quantifiable. This work creates a mathematical model of the village search environment and proposes static and dynamic resource allocation heuristics using robustness concepts. The result is a mission plan that is resilient to uncertainty in the environment and that saves valuable time for military planning staff.

Item Open Access
Secure CAN logging and data analysis (Colorado State University. Libraries, 2020)
Van, Duy, author; Daily, Jeremy, advisor; Simske, Steve, committee member; Papadopoulos, Christos, committee member; Hayne, Stephen, committee member
Controller Area Network (CAN) communications are an essential element of modern vehicles, particularly heavy trucks. However, CAN protocols are vulnerable from a cybersecurity perspective in that they have no mechanism for authentication or authorization. Attacks on vehicle CAN systems present a risk to driver privacy and possibly driver safety, so developing new tools and techniques to detect cybersecurity threats within CAN networks is a critical research topic. A key component of this research is compiling a large database of representative CAN data from operational vehicles on the road; this database will be used to develop methods for detecting intrusions or other potential threats. In this paper, an open-source CAN logger was developed that uses hardware and software following industry security standards to securely log and transmit heavy-vehicle CAN data. A hardware prototype demonstrated the ability to encrypt data at over 6 Megabits per second (Mbps) and to log all data at 100% bus load on a 1 Mbps CAN network in a laboratory setting. AES-128 in Cipher Block Chaining (CBC) mode was chosen for encryption. A Hardware Security Module (HSM) was used to generate and securely store asymmetric key pairs for cryptographic communication with a third-party cloud database, and Elliptic-Curve Cryptography (ECC) algorithms were implemented to perform key exchange and to sign the data for integrity verification. This solution ensures secure data collection and transmission because only encrypted data is ever stored or transmitted, and communication with the third-party cloud server uses shared secret keys derived from the asymmetric key pairs as well as Transport Layer Security (TLS).
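The encryption step maps naturally onto the pyca "cryptography" package. The sketch below is a minimal illustration of AES-128-CBC over a single CAN frame; the in-memory key and invented frame bytes stand in for the HSM-managed keys and live bus data of the actual logger.

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)   # 128-bit key; the real logger keeps keys in the HSM
    iv = os.urandom(16)    # fresh IV per logging session

    # An invented 12-byte stand-in for a logged CAN ID plus data payload.
    can_frame = bytes.fromhex("18fef100" + "1027ffffffffffff")

    padder = padding.PKCS7(128).padder()
    padded = padder.update(can_frame) + padder.finalize()

    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    print(ciphertext.hex())   # only ciphertext is ever stored or transmitted
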
Item Open Access
Supporting localized interactions using named data networking (Colorado State University. Libraries, 2017)
Calderon Jaramillo, Andres, author; Papadopoulos, Christos, advisor; Bohm, Wim, committee member; Hayne, Stephen, committee member
A common application in the Internet of Things (IoT) is accessing devices in a specific location. For example, a user may walk into a room and use a mobile device to control the lights or to access the temperature reading. Similarly, things in a location need to advertise their services: when a printer is moved into a room, it needs to make its presence known so that users in that room can access it with minimal configuration. An application developer can achieve these tasks by referring to devices using intuitive names such as /csu/mainCampus/csBuilding/room258/printer/activate. To construct such a name, the developer must make the application aware of its current location, and the device must enforce a location-based access control policy to ensure that only users in the same location as the device are allowed to access it. Our goal is to design a system that leverages the power of names in the Named Data Networking architecture to let developers write code that accesses and advertises services in a location such as a room or a building. Our system provides a convenient level of indirection so that developers can use names such as /thisRoom/printers/default/activate to initiate spontaneous interactions with local devices. In this thesis, we describe the system architecture and a prototype implementation; we also explore trust and security issues and qualitatively compare our NDN-based solution against an IP-based one.

Item Open Access
Switch choice in applied multi-task management (Colorado State University. Libraries, 2014)
Gutzwiller, Robert, author; Clegg, Benjamin, advisor; Wickens, Christopher, committee member; Kraiger, Kurt, committee member; Hayne, Stephen, committee member
Little is known to date about how operators make choices in environments where cognitive load is high and multiple different tasks compete for attention. This dissertation reviewed a large body of voluntary task switching literature, covering basic research into choice in task switching as well as the available literature on applied task switching. From this review and a prior model, a revised model of task-switching choice was developed that takes into account the task attributes of difficulty, priority, interest, and salience. In the first experiment, it was shown that task difficulty and priority influenced switching behavior. While task attributes were hypothesized to influence switching, a second major influence is time on task: in the second experiment, it was shown that tasks indeed vary in their interruptibility over time, and that this was related in part to which task was competing for attention as well as to the cognitive processing required by the ongoing task. In a third experiment, a new methodology was developed to experimentally assess the role of a diminishing rate of return for performing a task; this declining rate was expected to produce (and did produce) a general increase in switching away from the ongoing task over time. In conclusion, while task attributes and time on task play a major role in task switching in the current studies, defining the time period for the theorized effects appears to be the next major step toward understanding switch choice behavior. Additionally, though the experiments are novel and make a major contribution, to the extent that behavior is only partially represented in them, the methodology may miss some 'other' task behavior, such as visual sampling.

Item Open Access
Towards a secure and efficient search over encrypted cloud data (Colorado State University. Libraries, 2016)
Strizhov, Mikhail, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; McConnell, Ross, committee member; Bieman, James, committee member; Hayne, Stephen, committee member
Cloud computing enables new types of services where computational and network resources are available online through the Internet, and one of its most popular services is data outsourcing. For reasons of cost and convenience, public as well as private organizations can now outsource their large amounts of data to the cloud and enjoy the benefits of remote storage and management. At the same time, confidentiality of data stored on an untrusted cloud server is a major concern. To reduce these concerns, sensitive data such as personal health records, emails, and income tax and financial reports are usually outsourced in encrypted form using well-known cryptographic techniques. Although encrypted storage protects remote data from unauthorized access, it complicates some basic yet essential data utilization services, such as plaintext keyword search. The simple solution of downloading the data, decrypting, and searching locally is clearly inefficient, since storing data in the cloud is meaningless unless it can be easily searched and utilized. Thus, cloud services should enable efficient search on encrypted data to provide the benefits of a first-class cloud computing environment. This dissertation is concerned with developing novel searchable encryption techniques that allow the cloud server to perform multi-keyword ranked search as well as substring search incorporating position information. We present results accomplished in this area, including a comprehensive evaluation of existing solutions and searchable encryption schemes for ranked search and substring position search.
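For readers unfamiliar with searchable encryption, the toy below shows the general idea rather than the dissertation's schemes: the client indexes documents under keyword HMACs, so the server can answer keyword queries without learning the keywords. Ranking and substring-position search require substantially more machinery than this.

    import hmac, hashlib

    def trapdoor(key: bytes, keyword: str) -> bytes:
        return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

    client_key = b"\x00" * 32   # demo key only; never hard-code real keys
    docs = {1: "income tax report", 2: "personal health record"}

    # Client builds the index under keyword HMACs and outsources it.
    index = {}
    for doc_id, text in docs.items():
        for word in text.split():
            index.setdefault(trapdoor(client_key, word), []).append(doc_id)

    # Later, the server matches a submitted trapdoor without seeing "tax".
    print(index.get(trapdoor(client_key, "tax"), []))   # [1]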