Browsing by Author "Ray, Indrajit, advisor"
Now showing 1 - 17 of 17

Item Open Access
A heuristic-based approach to automatically extract personalized attack graph related concepts from vulnerability descriptions (Colorado State University. Libraries, 2017)
Mukherjee, Subhojeet, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; Byrne, Zinta, committee member
Computer users are not safe, be it at home or in public places. Public networks are more often administered by trained individuals who attempt to fortify those networks using strong administrative skills, state-of-the-art security tools and meticulous vigilance. This is, however, not true for home computer users. Being largely untrained, they are often the most likely targets of cyber attacks. These attacks are often executed in cleverly interleaved sequences leading to the eventual goal of the attacker. The Personalized Attack Graph (PAG) introduced by Urbanska et al. [24, 25, 32] can leverage the interplay of system configurations, attacker and user actions to represent a cleverly interleaved sequence of attacks on a single system. An instance of the PAG can be generated manually by observing the system configurations of a computer and collating them with possible security threats which can exploit existing system vulnerabilities and/or misconfigurations. However, the amount of manual labor involved in creating and periodically updating the PAG can be very high. As a result, the process of generating the PAG should be automated. The information required to generate these graphs is available on the Internet in the form of vulnerability descriptions. This information is, however, almost always written in natural language and lacks any form of structure. In this thesis, we propose an unsupervised heuristic-based approach which parses vulnerability descriptions and extracts instances of PAG-related concepts like system configurations, attacker and user actions. Extracted concepts can then be interleaved to generate the Personalized Attack Graph.

Item Open Access
A tale of 'T' metrics: choosing tradeoffs in multiobjective planning (Colorado State University. Libraries, 2013)
Roberts, Mark, author; Howe, Adele, advisor; Ray, Indrajit, advisor; Whitley, Darrell, committee member; Turk, Daniel, committee member
To view the abstract, please see the full text of the document.

Item Open Access
A vector model of trust to reason about trustworthiness of entities for developing secure systems (Colorado State University. Libraries, 2008)
Chakraborty, Sudip, author; Ray, Indrajit, advisor; Ray, Indrakshi, advisor
Security services rely to a great extent on some notion of trust. In all security mechanisms there is an implicit notion of trustworthiness of the involved entities. Security technologies like cryptographic algorithms, digital signatures, and access control mechanisms provide confidentiality, integrity, authentication, and authorization, thereby allowing some level of 'trust' in other entities. However, these techniques provide only a restrictive (binary) notion of trust and do not suffice to express the more general concept of 'trustworthiness'. For example, a digitally signed certificate does not tell whether there is any collusion between the issuer and the bearer. In fact, without a proper model and mechanism to evaluate and manage trust, it is hard to enforce trust-based security decisions. Therefore there is a need for a more generic model of trust. However, even today, there is no accepted formalism for specifying and reasoning with trust. Secure systems are built under the premise that concepts like "trustworthiness" or "trusted" are well understood, without agreement on what "trust" means, what constitutes trust, how to measure it, how to compare or compose two trusts, and how a computed trust can help to make a security decision.
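
To make the vector idea concrete, here is a minimal Python sketch of representing trust as a vector of components and collapsing it into a comparable value. The component names (experience, knowledge, recommendation), their [-1, 1] range, and the weighted combination are illustrative assumptions for this sketch, not the thesis's exact formalism.

```python
from dataclasses import dataclass


@dataclass
class TrustVector:
    """Trust expressed as a vector of components in [-1, 1] rather than a single bit."""
    experience: float      # direct past interactions with the entity (assumed component)
    knowledge: float       # direct/indirect knowledge, e.g. credentials or reputation data
    recommendation: float  # trust reported by third parties

    def value(self, weights=(0.5, 0.3, 0.2)) -> float:
        """Collapse the vector into a scalar using context-specific weights."""
        w_e, w_k, w_r = weights
        return w_e * self.experience + w_k * self.knowledge + w_r * self.recommendation


def more_trustworthy(a: TrustVector, b: TrustVector, weights=(0.5, 0.3, 0.2)) -> bool:
    """Compare two entities for a given trust context (a weight assignment)."""
    return a.value(weights) > b.value(weights)


if __name__ == "__main__":
    issuer = TrustVector(experience=0.8, knowledge=0.6, recommendation=0.4)
    bearer = TrustVector(experience=0.1, knowledge=0.5, recommendation=0.7)
    print(issuer.value(), bearer.value(), more_trustworthy(issuer, bearer))
```

A vector representation like this lets a security decision weight direct experience differently from hearsay, which a single binary "trusted / not trusted" flag cannot express.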

Item Open Access
Assessing vulnerabilities in software systems: a quantitative approach (Colorado State University. Libraries, 2007)
Alhazmi, Omar, author; Malaiya, Yashwant K., advisor; Ray, Indrajit, advisor
Security and reliability are two of the most important attributes of complex software systems. It is now common to use quantitative methods for evaluating and managing reliability. Software assurance requires a similar quantitative assessment of software security; however, only limited work has been done on the quantitative aspects of security. The analogy with software reliability can help develop similar measures for software security. However, there are significant differences that need to be identified and appropriately acknowledged. This work examines the feasibility of quantitatively characterizing major attributes of security using its analogy with reliability. In particular, we investigate whether it is possible to predict the number of vulnerabilities that can potentially be identified in a current or future release of a software system using analytical modeling techniques.
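
One analytical form closely associated with this line of vulnerability-discovery research is the Alhazmi-Malaiya Logistic (AML) model, which fits an S-shaped curve to cumulative vulnerability counts over time. The sketch below simply evaluates that curve; the parameter values are hypothetical, and presenting this particular model as the thesis's method is an assumption made for illustration.

```python
import math


def aml_cumulative_vulns(t, A, B, C):
    """Alhazmi-Malaiya Logistic (AML) vulnerability discovery model:
    expected cumulative vulnerabilities found by time t. B is the total
    number eventually found; A and C shape the slow-start, linear, and
    saturation phases of the S-curve."""
    return B / (B * C * math.exp(-A * B * t) + 1.0)


if __name__ == "__main__":
    # Hypothetical parameters for one software release (illustrative only).
    A, B, C = 0.002, 120.0, 0.5
    for month in (0, 6, 12, 24, 48, 96):
        print(f"month {month:3d}: ~{aml_cumulative_vulns(month, A, B, C):5.1f} vulnerabilities found")
```

Fitting such a curve to the vulnerabilities reported so far is what allows one to extrapolate how many more are likely to be found in a current or future release.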

Item Open Access
Automated security analysis of the home computer (Colorado State University. Libraries, 2014)
Urbanska, Malgorzata, author; Ray, Indrajit, advisor; Howe, Adele E., advisor; Byrne, Zinta, committee member
Home computer users pose special challenges to the security of their machines. Often home computer users do not realize that their computer activities have repercussions on computer security. Frequently, they are not aware of their role in keeping their home computer secure. Therefore, security analysis solutions for a home computer must differ significantly from standard security solutions. In addition to considering the properties of a single system, the characteristics of a home user have to be deliberated. Attack Graphs (AGs) are models that have been widely used for security analysis. A Personalized Attack Graph (PAG) extends the traditional AG for this purpose. It characterizes the interplay between vulnerabilities, user actions, attacker strategies, and system activities. The success of such security analysis depends on the level of detail of the information used to build the PAG. Because the PAG can have hundreds of elements and manual analysis can be error-prone and tedious, automation of this process is an essential component of security analysis for the home computer user. Automated security analysis, which applies the PAG, requires information about user behavior, attacker and system actions, and the vulnerabilities that are present in the home computer. In this thesis, we expatiate on 1) modeling home user behavior in order to obtain user-specific information, 2) analyzing vulnerability information resources to get the most detailed vulnerability descriptions, and 3) transforming vulnerability information into a format useful for automated construction of the PAG. We propose the Bayesian User Action model that quantitatively represents the relationships between different user characteristics and provides the likelihood of a user taking a specific cyber-related action. This model complements the PAG by delivering information about the home user. We demonstrate how different user behavior affects exploit likelihood in the PAG. We compare different vulnerability information sources in order to identify the best source for security analysis of the home computer. We calculate contextual similarity of the vulnerability descriptions to identify the same vulnerabilities across different vulnerability databases. We measure the similarity of vulnerability descriptions of the same vulnerability from multiple sources in order to identify any additional information that can be used to construct the PAG. We demonstrate a methodology for transforming a textual vulnerability description into a more structured format. We use Information Extraction (IE) techniques that are based on regular expression rules and dictionaries of keywords. We extract five types of information: infected software, attacker/user/system preconditions, and postconditions of exploiting vulnerabilities. We evaluate the performance of our IE system by measuring accuracy for each type of extracted information. Experiments on the influence of the user profile on the PAG show that exploit probabilities differ depending on user personality. Results also suggest that exploits are sensitive to user actions and that exploit probabilities can change depending on the evidence configuration. The results of the similarity analysis of vulnerability descriptions show that contextual similarity can be used to identify the same vulnerability across different vulnerability databases. The results also show that syntactic similarity does not imply additional vulnerability information. Results from the performance analysis of our IE system show that it works very well for the majority of vulnerability descriptions. The remaining extraction issues are mainly caused by 1) vulnerability descriptions that are difficult to express with regular expressions and 2) information that is not explicitly included in the vulnerability descriptions.
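
A minimal sketch of the regular-expression-and-dictionary style of information extraction described above: given a free-text vulnerability description, pull out the affected software, precondition cues, and postconditions. The patterns, cue dictionaries, and field names here are illustrative assumptions and far smaller than the rule set a real IE system would need.

```python
import re

# Illustrative cue dictionaries; the thesis's actual dictionaries are much larger.
PRECONDITION_CUES = {
    "attacker access": re.compile(r"\b(?:remote|local|authenticated) attacker", re.I),
    "user action": re.compile(r"user interaction|convince a user|user (?:opens|visits) a crafted", re.I),
}
POSTCONDITION_CUES = re.compile(
    r"(execute arbitrary code|denial of service|information disclosure|"
    r"privilege escalation|gain (?:root|administrator|elevated) privileges)", re.I)
AFFECTED_SOFTWARE = re.compile(
    r"\bin ([A-Z][\w.+-]*(?: [A-Z][\w.+-]*)*) (?:before|through|prior to) ([\w.]+)")


def extract(description: str) -> dict:
    """Pull PAG-related concepts out of a free-text vulnerability description."""
    return {
        "software": AFFECTED_SOFTWARE.findall(description),
        "preconditions": [name for name, pat in PRECONDITION_CUES.items()
                          if pat.search(description)],
        "postconditions": POSTCONDITION_CUES.findall(description),
    }


if __name__ == "__main__":
    desc = ("Buffer overflow in ExampleViewer before 2.3.1 allows a remote attacker "
            "to execute arbitrary code when a user opens a crafted file.")
    print(extract(desc))
```

Descriptions that do not follow such regular phrasing are exactly the cases the abstract flags as hard to express with regular expressions.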

Item Open Access
Automatic endpoint vulnerability detection of Linux and open source using the National Vulnerability Database (Colorado State University. Libraries, 2008)
Whyman, Paul Arthur, author; Ray, Indrajit, advisor; Krawetz, Neal, committee member; Whitley, L. Darrell, committee member; Hayne, Stephen, committee member
A means to reduce security risks to a network of computers is to manage which computers can participate on a network, and to control the participation of systems that do not conform to the security policy. Requiring systems to demonstrate their compliance with the policy can limit the risk of allowing non-complying systems access to trusted networks. One aspect of determining the risk a system represents is patch level, a comparison between the availability of vendor security patches and their application on a system. A fully updated system has all available patches applied. Using patch level as a security policy metric, systems can be evaluated as compliant yet still contain known vulnerabilities, representing real risks of exploitation. An alternative approach is a direct comparison of system software to public vulnerability reports contained in the National Vulnerability Database (NVD). This approach may produce a more accurate assessment of system risk for several reasons, including removing the delay caused by vendor patch development and analyzing system risk using vendor-independent vulnerability information. This work demonstrates empirically that current, fully patched systems contain numerous software vulnerabilities. This technique can apply to platforms other than those of Open Source origin. This alternative method, which compares system software components to lists of known software vulnerabilities, must reliably match system components to those listed as vulnerable. This match requires a precise identification of both the vulnerability and the software that the vulnerability affects. In the process of this analysis, significant issues arose within the NVD pertaining to the presentation of Open Source vulnerability information. Direct matching is not possible using the current information in the NVD. Furthermore, these issues support the belief that the NVD is not an accurate data source for popular statistical comparisons between closed and open source software.
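
The core of the comparison described above is matching an installed-software inventory against vulnerability records. The sketch below shows that matching step in its simplest possible form; the data structures, the "fixed-in version" field, and the naive version comparison are assumptions for illustration (real NVD records identify products with CPE names and version ranges, which is exactly where the matching difficulties described in the abstract arise).

```python
from dataclasses import dataclass


@dataclass
class NvdEntry:
    cve_id: str             # placeholder identifiers below, not real CVEs
    product: str            # real NVD entries use CPE names, e.g. cpe:2.3:a:openssl:openssl
    fixed_in: str           # simplification; real entries use version ranges


def parse_version(v: str) -> tuple:
    """Very naive numeric version parsing (illustrative only)."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())


def vulnerable(installed: dict, entries: list) -> list:
    """Return (package, version, CVE) triples where the installed version
    precedes the fixed version recorded for that product."""
    findings = []
    for entry in entries:
        version = installed.get(entry.product)
        if version and parse_version(version) < parse_version(entry.fixed_in):
            findings.append((entry.product, version, entry.cve_id))
    return findings


if __name__ == "__main__":
    # Hypothetical inventory (e.g. gathered from `dpkg -l` or `rpm -qa`) and cached NVD data.
    installed = {"openssl": "1.0.1", "bash": "4.3.30"}
    entries = [NvdEntry("CVE-0000-0001", "openssl", "1.0.2"),
               NvdEntry("CVE-0000-0002", "bash", "4.3.27")]
    print(vulnerable(installed, entries))
```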

Item Open Access
CPS security testbed: requirement analysis, prototype design and protection framework (Colorado State University. Libraries, 2023)
Talukder, Md Rakibul Hasan, author; Ray, Indrajit, advisor; Malaiya, Yashwant, committee member; Vijayasarathy, Leo, committee member
Testbeds are a practical way to perform security exercises on cyber physical systems (CPS) to understand vulnerabilities and the progression/impact of cyber-attacks. However, it is challenging to replicate a large CPS, such as a nuclear power plant or an electrical power grid, within the confines of a laboratory in a way that allows security experiments to be carried out. Thus, software-based simulations are becoming increasingly popular, as opposed to hardware-in-the-loop simulations, for CPS that form critical infrastructure. Unfortunately, a software-based CPS testbed oriented towards security-centric experiments requires a careful re-examination of requirements and an architectural design different from a CPS testbed for non-security-related experiments. On a security-focused testbed there is a need to run real attack scripts for red-teaming/blue-teaming exercises, which are, in the strictest sense of the term, malicious in nature. Thus, there is a need to protect the testbed itself from these attack experiments, which have the potential to go awry. The overall effect of an exploit on the whole system, and vulnerabilities at communication channels, need to be explored in particular when building a simulator for a security-centric CPS. Besides, when multiple experiments are conducted on the same testbed, there is a need to maintain isolation among these experiments so that no experiment can accidentally or maliciously compromise others and affect the fidelity of those results. Specific security-experiment-related supports are essential when designing such a testbed, but integrating a software-based simulator within the testbed to provide the necessary experiment support is challenging. In this thesis, we make three contributions. First, we present the design of an ideal testbed based on a set of requirements and supports that we have identified, focusing specifically on security experiments as the primary use case. Next, following this requirements analysis, we integrate a software-based simulator (Generic Pressurized Water Reactor) into a testbed design by modifying the implementation architecture to allow the execution of attack experiments on different networking architectures and protocols. Finally, we describe a novel security architecture and framework to ensure the protection of security-related experiments on a CPS testbed.

Item Open Access
Detecting non-secure memory deallocation with CBMC (Colorado State University. Libraries, 2021)
Singh, Mohit K., author; Prabhu, Vinayak, advisor; Ray, Indrajit, advisor; Ghosh, Sudipto, committee member; Ray, Indrakshi, committee member; Simske, Steve, committee member
Scrubbing sensitive data before releasing memory is a widely recommended but often ignored programming practice for developing secure software. Consequently, sensitive data such as cryptographic keys, passwords, and personal data can remain in memory indefinitely, thereby increasing the risk of exposure to hackers who can retrieve the data using memory dumps or exploit vulnerabilities such as Heartbleed and Etherleak. We propose an approach for detecting a specific memory safety bug called Improper Clearing of Heap Memory Before Release, referred to as Common Weakness Enumeration 244. The CWE-244 bug in a program allows the leakage of confidential information when a variable is not wiped before heap memory is freed. Our approach uses the CBMC model checker to detect this weakness and is based on instrumenting the program using (1) global variable declarations that track and monitor the state of the program variables relevant for CWE-244, and (2) assertions that help CBMC detect unscrubbed memory. We develop a tool, SecMD-Checker, implementing our instrumentation-based algorithm, and we provide experimental validation on the Juliet Test Suite showing that the tool is able to detect all the CWE-244 instances present in the test suite. The proposed approach has the potential to work with other model checkers and can be extended to detect other weaknesses that require variable tracking and monitoring, such as CWE-226, CWE-319, and CWE-1239.
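
A minimal sketch of the instrumentation idea: before each free(ptr), splice in a loop of assertions that every byte of the buffer is zero, so that CBMC reports a counterexample whenever unscrubbed heap memory is released. SecMD-Checker's actual instrumentation (global tracking variables, sensitivity analysis) is richer; the regex-based rewriting and the assumption that a length variable is known for each pointer are simplifications made for illustration.

```python
import re

# C code spliced in before each free(ptr): ask CBMC to prove every byte of the
# buffer was scrubbed. The length expression must be known for the pointer; the
# thesis's global tracking state serves that purpose, here we pass it in by hand.
SCRUB_CHECK = """\
for (size_t __i = 0; __i < {length_var}; __i++)
    __CPROVER_assert(((unsigned char *){ptr})[__i] == 0,
                     "CWE-244: heap memory not cleared before free");
"""

FREE_CALL = re.compile(r"\bfree\s*\(\s*(\w+)\s*\)\s*;")


def instrument(c_source: str, length_vars: dict) -> str:
    """Insert a scrub-check loop before every free(ptr) whose buffer length
    expression is known from `length_vars` (pointer name -> length expression)."""
    def _rewrite(match):
        ptr = match.group(1)
        if ptr not in length_vars:
            return match.group(0)          # unknown size: leave the call untouched
        check = SCRUB_CHECK.format(ptr=ptr, length_var=length_vars[ptr])
        return check + match.group(0)
    return FREE_CALL.sub(_rewrite, c_source)


if __name__ == "__main__":
    src = "    memset(key, 0, key_len);\n    free(key);\n"
    print(instrument(src, {"key": "key_len"}))
    # The instrumented file would then be checked with, e.g.:  cbmc prog.c --unwind 65
```

If the memset line is missing, CBMC finds an execution in which some byte is nonzero at the assertion, which is precisely the CWE-244 report the tool is after.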

Item Open Access
Digital signatures to ensure the authenticity and integrity of synthetic DNA molecules (Colorado State University. Libraries, 2019)
Kar, Diptendu Mohan, author; Ray, Indrajit, advisor; Ray, Indrakshi, advisor; Vijayasarathy, Leo R., committee member; Peccoud, Jean, committee member
DNA synthesis has become increasingly common, and many synthetic DNA molecules are licensed intellectual property (IP). DNA samples are shared between academic labs, ordered from DNA synthesis companies and manipulated for a variety of different purposes, mostly to study their properties and improve upon them. However, it is not uncommon for a sample to change hands many times with very little accompanying information and no proof of origin. This poses significant challenges to the original inventor of a DNA molecule trying to protect her IP rights. More importantly, following the anthrax attacks of 2001, there is an increased urgency to employ microbial forensic technologies to trace and track agent inventories. However, attribution of physical samples is next to impossible with existing technologies. In this research, we describe our efforts to solve this problem by embedding digital signatures in DNA molecules synthesized in the laboratory. We encounter several challenges that we do not face in the digital world. These challenges arise primarily from the fact that changes to a physical DNA molecule can affect its properties, random mutations can accumulate in DNA samples over time, DNA sequencers can sequence (read) DNA erroneously, and DNA sequencing is still relatively expensive (which means that laboratories would prefer not to read and re-read their DNA samples to get error-free sequences). We address these challenges and present a digital signature technology that can be applied to synthetic DNA molecules in living cells.
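
One concrete building block such a scheme needs is a way to turn signature bytes into nucleotides that can be appended to a synthesized sequence. The sketch below shows a straightforward 2-bits-per-base encoding and its round trip; the encoding, the use of a hash as a stand-in for a real signature, and the idea of simply appending the signature region are illustrative assumptions, not the thesis's actual construction (which must also tolerate mutations and sequencing errors).

```python
import hashlib

BASES = "ACGT"  # 2 bits per nucleotide: A=00, C=01, G=10, T=11


def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides, most significant bits first."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)


def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna; assumes len(seq) is a multiple of 4 and error-free reads."""
    vals = [BASES.index(b) for b in seq]
    return bytes((vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
                 for i in range(0, len(vals), 4))


if __name__ == "__main__":
    plasmid = "ATGGCCATTGTAATGGGCCGC"                            # hypothetical sequence
    signature = hashlib.sha256(plasmid.encode()).digest()[:16]   # stand-in for a real signature
    tagged = plasmid + bytes_to_dna(signature)                   # signature region appended
    recovered = dna_to_bytes(tagged[len(plasmid):])
    assert recovered == signature
    print("signature region:", bytes_to_dna(signature))
```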

Item Open Access
In-ComVec Sec: in-vehicle security for medium and heavy duty vehicles (Colorado State University. Libraries, 2017)
Mukherjee, Subhojeet, author; Ray, Indrakshi, advisor; Ray, Indrajit, advisor
Inside today's vehicles, embedded electronic control units (ECUs) manage different operations by communicating via the serial CAN bus. It has been shown that the CAN bus can be accessed by remote attackers to disrupt/manipulate normal vehicular operations. Heavy-duty vehicles, unlike their lighter counterparts, follow a common set of communication standards (SAE J1939) and are often used for transporting critical goods, thereby increasing their asset value. This work deals with the internal communication security of heavy-duty vehicles and is aimed at detecting/preventing malicious activities that can adversely affect human lives and the company fortunes reliant on such modes of transportation.

Item Open Access
On designing large, secure and resilient networked systems (Colorado State University. Libraries, 2019)
Mulamba Kadimbadimba, Dieudonné, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; McConnell, Ross, committee member; Vijayasarathy, Leo, committee member
Defending large networked systems against rapidly evolving cyber attacks is challenging. This is because of several factors. First, cyber defenders are always fighting an asymmetric warfare: while the attacker needs to find just a single unprotected security vulnerability to launch an attack, the defender needs to identify and protect against all possible avenues of attack on the system. Various cost factors, such as, but not limited to, costs related to identifying and installing defenses, costs related to security management, costs related to manpower training and development, and costs related to system availability, make this asymmetric warfare even more challenging. Second, newer and newer cyber threats are always emerging - the so-called zero-day attacks. It is not possible for a cyber defender to defend against an attack for which defenses are yet unknown. In this work, we investigate the problem of designing large and complex networks that are secure and resilient. There are two specific aspects of the problem that we look into. First is the problem of detecting anomalous activities in the network. While this problem has been variously investigated, we address it differently. We posit that anomalous activities are the result of mal-actors interacting with non mal-actors, and such anomalous activities are reflected in changes to the topological structure (in a mathematical sense) of the network. We formulate this problem as that of Sybil detection in networks. For our experimentation and hypothesis testing we instantiate the problem as that of Sybil detection in on-line social networks (OSNs). Sybil attacks involve one or more attackers creating and introducing several mal-actors (fake identities in on-line social networks), called Sybils, into a complex network. Depending on the nature of the network system, the goal of the mal-actors can be to unlawfully access data, to forge another user's identity and activity, or to influence and disrupt the normal behavior of the system. The second aspect that we look into is that of building resiliency in a large network that consists of several machines that collectively provide a single service to the outside world. Such networks are particularly vulnerable to Sybil attacks. While our Sybil detection algorithms achieve very high levels of accuracy, they cannot guarantee that all Sybils will be detected. Thus, to protect against such "residual" Sybils (that is, those that remain potentially undetected and continue to attack the network services), we propose a novel Moving Target Defense (MTD) paradigm to build resilient networks. The core idea is that for large enterprise-level networks, the survivability of the network's mission is more important than the security of one or more of the servers. We develop protocols to relocate services from server to server in a random way such that before an attacker has an opportunity to target a specific server and disrupt its services, the services will have migrated to another non-malicious server. The continuity of the service of the large network is thus sustained. We evaluate the effectiveness of our proposed protocols using theoretical analysis, simulations, and experimentation. For the Sybil detection problem we use both synthetic and real-world data sets and evaluate the algorithms for accuracy of Sybil detection. For the moving target defense protocols we implement a proof-of-concept in the context of access control as a service and run several large-scale simulations. The proof-of-concept demonstrates the effectiveness of the MTD paradigm. We evaluate the computation and communication complexity of the protocols as we scale up to larger and larger networks.

Item Open Access
On the design of a moving target defense framework for the resiliency of critical services in large distributed networks (Colorado State University. Libraries, 2018)
Amarnath, Athith, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; Hayne, Stephen, committee member
Security is a very serious concern in this digital era. Protecting and controlling access to secured data and services has placed increased emphasis on access control enforcement and management. While access control enforcement with strong policies ensures data confidentiality, availability, and integrity, protecting the access control service itself is equally important. When these services are hosted on a single server for a lengthy period of time, attackers have potentially unlimited time to periodically explore and enumerate the vulnerabilities of the server's configuration and launch targeted attacks on the service. The constant proliferation of cloud usage and distributed systems over the last decade has made it practical to distribute data or host services over a group of servers located in different geographical locations. Existing election algorithms used to provide service continuity in such distributed setups work well in a benign environment. However, these algorithms are not secure against skillful attackers who intend to manipulate or bring down the data or service. In this thesis, we design and implement the protection of critical services, such as access-control reference monitors, using the concept of moving target defense. This concept increases the level of difficulty faced by the attacker in compromising the point of service by periodically moving the critical service among a group of heterogeneous servers, thereby changing the attack surface and increasing uncertainty and randomness in the choice of the point of service. We describe an efficient Byzantine fault-tolerant leader election protocol for small networks that achieves the security and performance goals described in the problem statement. We then extend this solution to large enterprise networks by introducing a random walk protocol that randomly chooses a subset of servers to take part in the election protocol.
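
A minimal, centralized simulation of the idea in the last two sentences: use a random walk over the server connectivity graph to pick an unpredictable election group, then move the critical service to a host chosen from that group. The graph, the walk length, and the uniform choice of the next host are illustrative assumptions; the thesis's actual protocols are distributed and Byzantine fault-tolerant.

```python
import random


def random_walk_subset(adjacency: dict, start: str, steps: int, subset_size: int) -> set:
    """Walk the server connectivity graph at random and collect the first
    `subset_size` distinct servers visited; these then run the election."""
    current, visited = start, {start}
    while len(visited) < subset_size and steps > 0:
        current = random.choice(adjacency[current])
        visited.add(current)
        steps -= 1
    return visited


def relocate_service(candidates: set) -> str:
    """Pick the next host at random among the elected candidates, so an
    attacker cannot predict where the service will live next."""
    return random.choice(sorted(candidates))


if __name__ == "__main__":
    # Hypothetical enterprise network: server -> reachable neighbours.
    net = {"s1": ["s2", "s3"], "s2": ["s1", "s4"], "s3": ["s1", "s4", "s5"],
           "s4": ["s2", "s3", "s5"], "s5": ["s3", "s4"]}
    for epoch in range(3):
        group = random_walk_subset(net, start="s1", steps=50, subset_size=3)
        print(f"epoch {epoch}: election group {sorted(group)} -> host {relocate_service(group)}")
```

Because the election group and the destination change every epoch, an attacker who spends time profiling one server finds that the service has already moved on.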

Item Open Access
On the design of a secure and anonymous publish-subscribe system (Colorado State University. Libraries, 2012)
Mulamba Kadimbadimba, Dieudonne, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; Vijayasarathy, Leo, committee member
The reliability and high availability of data have made online servers very popular among single users and organizations like hospitals, insurance companies, and administrations. This has led to an increased dissemination of personal data on public servers. These online companies are increasingly adopting publish-subscribe as a new model for storing and managing data on a distributed network. While bringing real improvements to the way these online companies store and manage data in a dynamic and distributed environment, publish-subscribe also brings new challenges of security and privacy. The centralization of personal data on public servers has raised citizens' concerns about their privacy. Several security breaches involving the leakage of personal data have occurred, showing us how crucial this issue has become. A significant amount of work has been done in the field of securing publish-subscribe systems. However, all of this research assumes that the server is a trusted entity, an assumption which ignores the fact that this server can be honest but curious. This leads to the need to develop a means to protect publishers and subscribers from server curiosity. One solution to this problem could be to anonymize all communications involving publishers and subscribers. This solution in turn raises another issue: how to allow a subscriber to query a file that was anonymously uploaded to the server by the publisher. In this work, we propose an implementation of a communication protocol that allows users to asynchronously and anonymously exchange messages and that also supports secure deletion of messages.

Item Open Access
Privacy preserving linkage and sharing of sensitive data (Colorado State University. Libraries, 2018)
Lazrig, Ibrahim Meftah, author; Ray, Indrakshi, advisor; Ray, Indrajit, advisor; Malaiya, Yashwant, committee member; Vijayasarathy, Leo, committee member; Ong, Toan, committee member
Sensitive data, such as personal and business information, is collected by many service providers nowadays. This data is considered a rich source of information for research purposes that could benefit individuals, researchers and service providers. However, because of the sensitivity of such data, privacy concerns, legislation, and conflicts of interest, data holders are reluctant to share their data with others. Data holders typically filter out or obliterate privacy-related sensitive information from their data before sharing it, which limits the utility of this data and affects the accuracy of research. Such practice protects individuals' privacy; however, it prevents researchers from linking records belonging to the same individual across different sources. This is commonly referred to as the record linkage problem by the healthcare industry. In this dissertation, our main focus is on designing and implementing efficient privacy-preserving methods that will encourage sensitive information sources to share their data with researchers without compromising the privacy of the clients or affecting the quality of the research data. The proposed solution should be scalable and efficient for real-world deployments and provide good privacy assurance. While this problem has been investigated before, most of the proposed solutions were either partial solutions, inaccurate, or impractical, and therefore subject to further improvements. We have identified several issues and limitations in the state-of-the-art solutions and provided a number of contributions that improve upon existing solutions. Our first contribution is the design of a privacy-preserving record linkage protocol using a semi-trusted third party. The protocol allows a set of data publishers (data holders) who compete with each other to share sensitive information with subscribers (researchers) while preserving the privacy of their clients and without sharing encryption keys. Our second contribution is the design and implementation of a probabilistic privacy-preserving record linkage protocol that accommodates discrepancies and errors in the data, such as typos. This work builds upon the previous work by linking records that are similar, where the similarity range is formally defined. Our third contribution is a protocol that performs information integration and sharing without third-party services. We use garbled-circuit secure computation to design and build a system that performs record linkage between two parties without sharing their data. Our design uses Bloom filters as inputs to the garbled circuits and performs probabilistic record linkage using the Dice coefficient similarity measure. As garbled circuits are known for their expensive computations, we propose new approaches that reduce the computation overhead needed to achieve a given level of privacy. We built a scalable record linkage system using garbled circuits that could be deployed in a distributed computation environment like the cloud, and evaluated its security and performance. One of the performance issues in linking large datasets is the amount of secure computation needed to compare every pair of records across the linked datasets to find all possible record matches. To reduce the amount of computation, a method known as blocking is used to filter out as many as possible of the record pairs that will not match, and to limit the comparison to a subset of record pairs (called candidate pairs) that possibly match. Most current blocking methods either require the parties to share blocking keys (called block identifiers), extracted from the domain of some record attributes (termed blocking variables), or to share reference data points and group their records around these points using some similarity measure. Though these methods reduce the computation substantially, they leak too much information about the records within each block. Toward this end, we proposed a novel privacy-preserving approximate blocking scheme that allows parties to generate the list of candidate pairs with high accuracy, while protecting the privacy of the records in each block.
Our scheme is configurable, so that the desired level of performance and accuracy can be achieved for a required level of privacy. We analyzed the accuracy and privacy of our scheme, implemented a prototype, and experimentally evaluated its accuracy and performance against different levels of privacy.
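
A minimal sketch of the non-private core of the third contribution: encode a name field as a bigram Bloom filter and score pairs with the Dice coefficient, so that small typos still yield a high similarity. The filter size, hash count, and bigram tokenization are illustrative assumptions; in the actual protocols this comparison is evaluated inside garbled circuits so that neither party sees the other's filter.

```python
import hashlib

FILTER_BITS = 256
NUM_HASHES = 4


def bigrams(name: str) -> set:
    """Split a padded, lower-cased field into overlapping character bigrams."""
    s = f"_{name.lower().strip()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}


def bloom_encode(name: str) -> set:
    """Encode a field as the set of Bloom-filter bit positions its bigrams hash to."""
    bits = set()
    for gram in bigrams(name):
        for k in range(NUM_HASHES):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).digest()
            bits.add(int.from_bytes(digest[:4], "big") % FILTER_BITS)
    return bits


def dice(a: set, b: set) -> float:
    """Dice coefficient between two bit sets: 2|A and B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0


if __name__ == "__main__":
    alice = bloom_encode("Jonathan Smith")
    bob = bloom_encode("Jonathon Smith")    # typo: should still link as the same person
    eve = bloom_encode("Maria Gonzalez")
    print(f"Jonathan vs Jonathon: {dice(alice, bob):.2f}")   # high similarity -> match
    print(f"Jonathan vs Maria:    {dice(alice, eve):.2f}")   # low similarity  -> non-match
```

Blocking, as described above, would then restrict which pairs ever reach this (expensive, secure) comparison in the first place.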

Item Open Access
Searching over encrypted data (Colorado State University. Libraries, 2017)
Moataz, Tarik, author; Ray, Indrajit, advisor; Ray, Indrakshi, advisor; McConnell, Ross, committee member; Wang, Haonan, committee member; Boulahia Cuppens, Nora, committee member; Cuppens, Frédéric, committee member
Cloud services offer reduced costs, elasticity and a promise of unlimited managed storage space that attract many end-users. File sharing, collaborative platforms, email platforms, back-up servers and file storage are some of the services that make the cloud an essential tool for everyday use. Currently, most operating systems offer built-in outsourced cloud storage applications by design, such as OneDrive and iCloud, as natural substitutes for local storage. However, many users, even those willing to use the aforementioned cloud services, remain reluctant to fully adopt cloud outsourced storage and services. Concerns related to data confidentiality raise uncertainty for users maintaining sensitive information. There have been many recurrent, worldwide data breaches that led to the disclosure of users' sensitive information. To name a few: a breach of Yahoo in late 2014, publicly announced in September 2016 and known as the largest data breach in Internet history, led to the disclosure of more than 500 million user accounts; breaches of the health insurers Anthem in February 2015 and Premera BlueCross BlueShield in March 2015 led to the disclosure of credit card information, bank account information, social security numbers, income data and more for millions of customers and users. A traditional countermeasure against such devastating attacks consists of encrypting users' data so that even if a security breach occurs, the attackers cannot get any information from the data. Unfortunately, this solution impedes most cloud services, and in particular, searching over outsourced data. Researchers therefore became interested in the following question: "how to search on outsourced encrypted data while preserving efficient communication, computation and storage overhead?" This question has seen several solutions, mostly based on cryptographic primitives, offering various security and efficiency guarantees. While this problem has been explicitly identified for more than a decade, many research dimensions remain unsolved. The main goal of this thesis is to come up with practical constructions that are (1) suitable for real-life deployments, verifying necessary efficiency requirements, and (2) providing good security assurances. Throughout our research investigation, we identified symmetric searchable encryption (SSE) and oblivious RAM (ORAM) as the two main cryptographic primitive candidates for real-life settings. We have recognized several challenges and issues inherent to these constructions and provided a number of contributions that improve upon the state of the art. First, we contributed to making SSE schemes more expressive by enabling Boolean, semantic, and substring queries. Practitioners, however, need to be very careful about the resulting balance between security leakage and the degree of desired expressiveness. Second, we improve ORAM's bandwidth by introducing a novel recursive data structure and a new eviction procedure for the tree-based class of ORAM constructions; we also introduce the concept of resizability in ORAM, which is a required feature for cloud storage elasticity.
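
For readers unfamiliar with SSE, here is a minimal single-keyword sketch of the general idea the thesis builds on: the client derives a pseudorandom token per keyword and outsources a token-to-document-id index, so the server can answer searches without ever seeing keywords or plaintext. This is a generic textbook-style construction with illustrative names, not one of the thesis's schemes, and it deliberately ignores the leakage and expressiveness issues (Boolean, semantic, substring queries) that the thesis addresses.

```python
import hashlib
import hmac


def search_token(key: bytes, keyword: str) -> str:
    """Deterministic token the client derives for a keyword; the server only
    ever sees this pseudorandom label, never the keyword itself."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()


def build_encrypted_index(key: bytes, docs: dict) -> dict:
    """Client side: map each keyword's token to the ids of documents containing it.
    The documents themselves would be uploaded separately under standard encryption."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(search_token(key, word), []).append(doc_id)
    return index


def server_search(index: dict, token: str) -> list:
    """Server side: look up the token without learning the underlying keyword."""
    return index.get(token, [])


if __name__ == "__main__":
    key = b"client-secret-key-0000000000000000"
    docs = {1: "quarterly income tax report", 2: "personal health record", 3: "tax filing notes"}
    index = build_encrypted_index(key, docs)                     # outsourced to the cloud server
    print(server_search(index, search_token(key, "tax")))        # -> [1, 3]
```

Even this toy version leaks the access pattern (which identifiers match which token), which is exactly the kind of leakage that ORAM-based techniques aim to hide at additional bandwidth cost.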

Item Open Access
Towards a secure and efficient search over encrypted cloud data (Colorado State University. Libraries, 2016)
Strizhov, Mikhail, author; Ray, Indrajit, advisor; Ray, Indrakshi, committee member; McConnell, Ross, committee member; Bieman, James, committee member; Hayne, Stephen, committee member
Cloud computing enables new types of services where computational and network resources are available online through the Internet. One of the most popular services of cloud computing is data outsourcing. For reasons of cost and convenience, public as well as private organizations can now outsource their large amounts of data to the cloud and enjoy the benefits of remote storage and management. At the same time, the confidentiality of remotely stored data on an untrusted cloud server is a big concern. In order to reduce these concerns, sensitive data, such as personal health records, emails, income tax and financial reports, are usually outsourced in encrypted form using well-known cryptographic techniques. Although encrypted data storage protects remote data from unauthorized access, it complicates some basic, yet essential, data utilization services such as plaintext keyword search. A simple solution of downloading the data, decrypting it and searching locally is clearly inefficient, since storing data in the cloud is meaningless unless it can be easily searched and utilized. Thus, cloud services should enable efficient search on encrypted data to provide the benefits of a first-class cloud computing environment. This dissertation is concerned with developing novel searchable encryption techniques that allow the cloud server to perform multi-keyword ranked search as well as substring search incorporating position information. We present results that we have accomplished in this area, including a comprehensive evaluation of existing solutions and searchable encryption schemes for ranked search and substring position search.

Item Open Access
Towards an efficient vulnerability analysis methodology for better security risk management (Colorado State University. Libraries, 2010)
Poolsappasit, Nayot, author; Ray, Indrajit, advisor; Ray, Indrakshi, 1966-, advisor; McConnell, Ross M., committee member; Jayasumana, Anura P., committee member
Risk management is a process that allows IT managers to balance the cost of protective measures against gains in mission capability. A system administrator has to make a decision and choose an appropriate security plan that maximizes resource utilization. However, making this decision is not a trivial task. Most organizations have tight budgets for IT security; therefore, the chosen plan must be reviewed as thoroughly as other management decisions. Unfortunately, even the best-practice security risk management frameworks do not provide adequate information for effective risk management. Vulnerability scanning and penetration testing, which form the core of traditional risk management, identify only the set of system vulnerabilities. Given the complexity of today's network infrastructure, it is not enough to consider the presence or absence of vulnerabilities in isolation. Materializing a threat often requires the combination of multiple attacks using different vulnerabilities. Such a requirement is far beyond the capabilities of current-day vulnerability scanners. Consequently, assessing the cost of an attack or the cost of implementing appropriate security controls is possible only in a piecemeal manner. In this work, we develop and formalize a new network vulnerability analysis model. The model encodes, in a concise manner, the contributions of different security conditions that lead to system compromise. We extend the model with a systematic risk assessment methodology to support reasoning under uncertainty in an attempt to evaluate the vulnerability exploitation probability. We develop a cost model to quantify the potential loss and gain that can occur in a system if certain conditions are met (or protected). We also quantify the security control cost incurred to implement a set of security hardening measures. We propose solutions for the system administrator's decision problems covering the areas of risk analysis and risk mitigation analysis. Finally, we extend the vulnerability assessment model to the areas of intrusion detection and forensic investigation.
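
A minimal sketch of the kind of reasoning the last abstract describes: propagate exploitation likelihood through an AND/OR attack-graph structure to estimate the probability of a combined, multi-step compromise. The graph shape, the base probabilities, and the independence assumption are purely illustrative; the thesis's model and its accompanying cost analysis are considerably richer.

```python
def node_probability(node: str, graph: dict, base: dict, memo=None) -> float:
    """Probability that an attacker reaches `node`.
    AND nodes require every precondition; OR nodes require at least one.
    Independence between branches is assumed purely to keep the sketch simple."""
    memo = {} if memo is None else memo
    if node in memo:
        return memo[node]
    kind, children = graph.get(node, ("LEAF", []))
    if not children:
        p = base.get(node, 1.0)
    else:
        child_ps = [node_probability(c, graph, base, memo) for c in children]
        if kind == "AND":
            combined = 1.0
            for cp in child_ps:
                combined *= cp
        else:  # OR: complement of "no child succeeds"
            miss = 1.0
            for cp in child_ps:
                miss *= (1.0 - cp)
            combined = 1.0 - miss
        p = base.get(node, 1.0) * combined
    memo[node] = p
    return p


if __name__ == "__main__":
    # Hypothetical graph: root compromise needs a foothold AND privilege escalation;
    # the foothold comes from either of two vulnerabilities.
    graph = {"root_compromise": ("AND", ["foothold", "priv_esc"]),
             "foothold": ("OR", ["cve_a", "cve_b"])}
    base = {"cve_a": 0.4, "cve_b": 0.6, "priv_esc": 0.5, "root_compromise": 0.9}
    print(f"P(root compromise) = {node_probability('root_compromise', graph, base):.3f}")
```

Attaching expected losses to nodes and costs to security controls on top of such probabilities is what turns the graph into a decision aid for choosing which hardening measures to fund.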