Department of Computer Science
This digital collection contains faculty/student publications, theses, and dissertations from the Department of Computer Science.
Browsing Department of Computer Science by Issue Date
Now showing 1 - 20 of 323

Item Open Access
Essential competencies of exceptional professional software engineers (Colorado State University. Libraries, 1991)
Turley, Richard T., author; Johnson, Gerry, advisor; Bieman, James M., advisor; Olender, Kurt, committee member; Neidt, Charles O., committee member; Walicki, Jack, committee member
This dissertation presents a differential study of exceptional and non-exceptional professional software engineers in the work environment. The first phase of the study reports an in-depth review of 20 engineers. The study reports biographical data, Myers-Briggs Type Indicator test results, and Critical Incident Interview data for 10 exceptional and 10 non-exceptional subjects. Phase 1 concludes with a description of 38 essential competencies of software engineers. Phase 2 of this study surveys 129 engineers. Phase 2 reports biographical data for the sample and concludes that the only simple demographic predictor of performance is years of experience in software. This variable is able to correctly classify 63% of the cases studied. Phase 2 also has the participants complete a Q-Sort of the 38 competencies identified in Phase 1. Nine of these competencies are differentially related to engineer performance. A 10-variable Canonical Discriminant Function is derived which is capable of correctly classifying 81% of the cases studied. This function consists of three biographical variables and seven competencies. The competencies related to Personal Attributes and Interpersonal Skills are identified as the most significant factors contributing to performance differences.

Item Open Access
A multi-level code comprehension model for large scale software (Colorado State University. Libraries, 1996)
Vans, A. Marie, author; von Mayrhauser, Anneliese, advisor; Bieman, James, committee member; Olender, Kurt, committee member; Volbrecht, Vicki, committee member
For the past 20 years researchers have studied how programmers understand code they did not write. Most of this research has concentrated on small-scale code understanding. We consider it necessary to design studies that observe programmers working on large-scale code in production environments. We describe the design and implementation of such a study, which included 11 maintenance engineers working on various maintenance tasks. The objective is to build a theory based on observations of programmers working on real tasks. Results show that programmers understand code at different levels of abstraction. Expertise in the application domain, amount of prior experience with the code, and task can determine the types of actions taken during maintenance, the level of abstraction at which the programmer works, and the information needed to complete a maintenance task. A better grasp of how programmers understand large-scale code and what is most efficient and effective can lead to better tools, better maintenance guidelines, and documentation.

Item Open Access
An integrated method for improving testing effectiveness and efficiency (Colorado State University. Libraries, 2000)
Stringfellow, Catherine V., author; Von Mayrhauser, Anneliese, advisor; Bieman, James M., committee member; Zimmerman, Donald E., committee member; France, Robert B., committee member
The aim of testing is to find errors and to find them as early as possible. Specifically, system testing should uncover more errors before release to reduce the number of errors found post-release. System testing should also prevent the release of products that would result in the discovery of many post-release errors. Studies indicate that post-release errors cost more to fix than errors found earlier in the life cycle. The effectiveness and efficiency of system testing depend on many factors, not only the expertise and quality of the testers and the techniques they employ. This dissertation develops an integrated method, using various techniques, that will improve testing effectiveness and efficiency. Some of these techniques already exist, but are applied in a new or different way. The integrated method enables root cause analysis of post-release problems by tracing these problems to one or more factors that influence system testing efficiency. Development defect data help to identify which parts of the software should be tested more intensely and earlier because they were fault-prone in development. Based on assessment results, testers can develop testing guidelines to make system test more effective. A case study applies this evaluation instrument to existing project data from a large software product (a medical record system). Successive releases of the software product validate the method. During system testing, testers may need to determine quantitatively whether to continue testing or to stop, recommending release. Early stopping could decrease the cost of testing, but has the disadvantage of possibly missing problems that would have been detected had system testing continued. Testers need to evaluate the efficiency of currently used methods and to improve the efficiency of future testing efforts. This dissertation develops empirical techniques to determine stopping points during testing. It proposes a new selection method for software reliability growth models that can be used to make release decisions. The case study compares and evaluates these techniques on actual test result data from industry. Quality assessment of multiple releases of the same product forms the basis of longitudinal decisions, such as re-engineering. Techniques using data from prior releases help to identify parts of the system that are consistently problematic. This information aids in developing additional testing guidelines for future releases of the product. This dissertation adds to a study that adapted a reverse architecting technique to identify fault relationships among system components based on whether they are involved in the same defect fix. The case study applies this technique to identify those parts of the software that need to be tested more. Results of the case study demonstrate that the integrated method can improve the effectiveness and efficiency of system test. The method identified problematic software components using data from prior releases and development. Results of prioritizing show that fault-prone components tested earlier reveal more defects earlier. Development should, therefore, have more time to fix these defects before release. The method was also able to estimate remaining defect content. The estimates were used to make release decisions. Based on data from post-release and interviews with the test manager, the method recommended the right release decisions.
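
The release-decision idea in the abstract above (fit a software reliability growth model to system-test defect data and stop when the estimated remaining defect content is low enough) can be illustrated with a short sketch. This is a minimal example, not the dissertation's actual selection method: it assumes a Goel-Okumoto style mean-value function m(t) = a(1 - e^(-bt)), made-up weekly defect counts, and an arbitrary release threshold.

```python
# Hedged sketch: fit a Goel-Okumoto growth model to cumulative defect data
# and estimate remaining defects to support a stop-test decision.
# The defect counts and threshold below are illustrative, not real project data.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative defects found by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)                                 # 12 weeks of system test
cum_defects = np.array([12, 25, 34, 45, 52, 58, 61, 66, 68, 70, 71, 72])

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=[80.0, 0.2])
remaining = a_hat - cum_defects[-1]                      # estimated defects not yet found

print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"estimated remaining defects: {remaining:.1f}")
RELEASE_THRESHOLD = 5.0                                  # assumed acceptance criterion
print("recommend release" if remaining < RELEASE_THRESHOLD else "continue testing")
```

In the dissertation several candidate growth models are compared and selected among; the sketch fits only one for brevity.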

Item Open Access
A synthesis of reinforcement learning and robust control theory (Colorado State University. Libraries, 2000)
Kretchmar, R. Matthew, author; Anderson, Charles, advisor; Howe, Adele E., committee member; Whitley, L. Darrell, committee member; Young, Peter M., committee member; Hittle, Douglas C., committee member
The pursuit of control algorithms with improved performance drives the entire control research community as well as large parts of the mathematics, engineering, and artificial intelligence research communities. A fundamental limitation on achieving control performance is the conflicting requirement of maintaining system stability. In general, the more aggressive the controller, the better the control performance, but also the closer the system is to instability. Robust control is a collection of theories, techniques, and tools that form one of the leading edge approaches to control. Most controllers are designed not on the physical plant to be controlled, but on a mathematical model of the plant; hence, these controllers often do not perform well on the physical plant and are sometimes unstable. Robust control overcomes this problem by adding uncertainty to the mathematical model. The result is a more general, less aggressive controller which performs well on both the model and the physical plant. However, the robust control method also sacrifices some control performance in order to achieve its guarantees of stability. Reinforcement learning based neural networks offer some distinct advantages for improving control performance. Their nonlinearity enables the neural network to implement a wider range of control functions, and their adaptability permits them to improve control performance via on-line, trial-and-error learning. However, neuro-control is typically plagued by a lack of stability guarantees. Even momentary instability cannot be tolerated in most physical plants, and thus the threat of instability prohibits the application of neuro-control in many situations. In this dissertation, we develop a stable neuro-control scheme by synthesizing the two fields of reinforcement learning and robust control theory. We provide a learning system with many of the advantages of neuro-control. Using functional uncertainty to represent the nonlinear and time-varying components of the neural networks, we apply robust control techniques to guarantee the stability of our neuro-controller. Our scheme provides stable control not only for a specific fixed-weight neural network, but also for a neuro-controller in which the weights are changing during learning. Furthermore, we apply our stable neuro-controller to several control tasks to demonstrate that the theoretical stability guarantee is readily applicable to real-life control situations. We also discuss several problems we encounter and identify potential avenues of future research.

Item Open Access
Garbage elimination in SA-C host code (Colorado State University. Libraries, 2001)
Segreto, Steve, author; Bohm, Wim, advisor; Draper, Bruce, committee member
Single-assignment C (SA-C) is a functional programming language with a rich instruction set designed to create and manipulate arrays using array slices and window generators. It is well-suited for the fields of graphics, AI and image processing within reconfigurable computing environments. Garbage is defined as any SA-C array data which is unused or unreferenced in the host code program heap at any time. Garbage must not be created and it must be freed as soon as possible. In this paper it will be shown that the single-assignment properties of the language create garbage when single-assignment occurs in loops. This behavior is studied and a static solution is presented called pointer reuse. The non-circular aliases resulting from strict single-assignment alias creation coupled with the side-effect free nature of statement blocks lead to a dynamic reference counting technique which can provide immediate elimination of garbage. Aliases and special loop-carried variable dependencies complicate matters further and are examined in this paper.

Item Open Access
An algorithmic implementation of expert object recognition in ventral visual pathway (Colorado State University. Libraries, 2002)
Baek, Kyungim, author
Understanding the mechanisms underlying visual object recognition has been an important subject in both human and machine vision since the early days of cognitive science. Current state-of-the-art machine vision systems can perform only rudimentary tasks in highly constrained situations compared to the powerful and flexible recognition abilities of the human visual system. In this work, we provide an algorithmic analysis of psychological and anatomical models of the ventral visual pathway, more specifically the pathway that is responsible for expert object recognition, using the current state of machine vision technology. As a result, we propose a biologically plausible expert object recognition system composed of a set of distinct component subsystems performing feature extraction and pattern matching. The proposed system is evaluated on four different multi-class data sets, comparing the performance of the system as a whole to the performance of its component subsystems alone. The results show that the system matches the performance of state-of-the-art machine vision techniques on uncompressed data, and performs better when the stored data is highly compressed. Our work on building an artificial vision system based on biological models and theories not only provides a baseline for building more complex, end-to-end vision systems, but also facilitates interactions between computational and biological vision studies by providing feedback to both communities.
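
As a rough illustration of the "feature extraction followed by pattern matching" decomposition described in the expert object recognition abstract above, here is a minimal sketch: PCA-style compression of image vectors followed by nearest-neighbor matching. It is not the dissertation's biologically motivated architecture; the random data, the number of components, and the matching rule are assumptions chosen only to keep the example self-contained.

```python
# Hedged sketch: a two-stage recognizer -- linear feature extraction (PCA)
# followed by nearest-neighbor pattern matching on the compressed features.
# Random data stands in for real images; dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_per_class, dim, n_components = 4, 30, 256, 16

# Synthetic "images": each class is a Gaussian blob around its own prototype.
prototypes = rng.normal(size=(n_classes, dim))
X = np.vstack([p + 0.3 * rng.normal(size=(n_per_class, dim)) for p in prototypes])
y = np.repeat(np.arange(n_classes), n_per_class)

# Feature extraction: project onto the top principal components of the data.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ vt[:n_components].T          # compressed representation

# Pattern matching: nearest neighbor in feature space (leave-one-out).
dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
predictions = y[np.argmin(dists, axis=1)]
print(f"leave-one-out accuracy: {(predictions == y).mean():.2f}")
```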

Item Open Access
Machine learned boundary definitions for an expert's tracing assistant in image processing (Colorado State University. Libraries, 2003)
Crawford-Hines, Stewart, author; Anderson, Charles W., advisor; Draper, Bruce A. (Bruce Austin), 1962-, committee member; Beveridge, J. Ross, committee member; Alciatore, David G., committee member
Most image processing work addressing boundary definition tasks embeds the assumption that an edge in an image corresponds to the boundary of interest in the world. In straightforward imagery this is true; however, it is not always the case. There are images in which edges are indistinct or obscure, and these images can only be segmented by a human expert. The work in this dissertation addresses the range of imagery between the two extremes of those straightforward images and those requiring human guidance to appropriately segment. By freeing systems of a priori edge definitions and building in a mechanism to learn the boundary definitions needed, systems can do better and be more broadly applicable. This dissertation presents the construction of such a boundary-learning system and demonstrates the validity of this premise on real data. A framework was created for the task in which expert-provided boundary exemplars are used to create training data, which in turn are used by a neural network to learn the task and replicate the expert's boundary tracing behavior. This is the framework for the Expert's Tracing Assistant (ETA) system. For a representative set of nine structures in the Visible Human imagery, ETA was compared and contrasted to two state-of-the-art, user-guided methods--Intelligent Scissors (IS) and Active Contour Models (ACM). Each method was used to define a boundary, and the distances between these boundaries and an expert's ground truth were compared. Across independent trials, there will be a natural variation in an expert's boundary tracing, and this degree of variation served as a benchmark against which these three methods were compared. For simple structural boundaries, all the methods were equivalent. However, in more difficult cases, ETA was shown to significantly better replicate the expert's boundary than either IS or ACM. In these cases, where the expert's judgement was most called into play to bound the structure, ACM and IS could not adapt to the boundary character used by the expert while ETA could.
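
The core loop of the ETA framework described above (turn expert-traced exemplars into training data, then train a neural network to reproduce the tracing behavior) can be sketched in a few lines. This is a simplified stand-in, not the dissertation's actual pipeline: it assumes grayscale patches centered on candidate pixels, labels derived from whether a pixel lies on the expert's trace, and scikit-learn's MLPClassifier as a generic feed-forward network.

```python
# Hedged sketch: learn an expert's boundary definition from traced exemplars.
# Patches on the expert's trace are positives, patches elsewhere negatives;
# a small feed-forward network then scores new pixels as boundary / not boundary.
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_patch(image, row, col, half=3):
    """Flattened (2*half+1)^2 patch around (row, col); indices assumed in range."""
    return image[row - half:row + half + 1, col - half:col + half + 1].ravel()

rng = np.random.default_rng(1)
image = rng.random((128, 128))                     # stand-in for a Visible Human slice
trace = [(r, 64 + int(5 * np.sin(r / 10))) for r in range(10, 118)]  # expert's trace

on_pixels = trace
off_pixels = [(r, c + 15) for r, c in trace]       # sampled away from the boundary
X = np.array([extract_patch(image, r, c) for r, c in on_pixels + off_pixels])
y = np.array([1] * len(on_pixels) + [0] * len(off_pixels))

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X, y)                                      # replicate the expert's labeling
print("training accuracy:", net.score(X, y))
```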

Item Open Access
Empirical modeling and analysis of local search algorithms for the job-shop scheduling problem (Colorado State University. Libraries, 2003)
Watson, Jean-Paul, author; Howe, Adele E., advisor; Whitley, L. Darrell, advisor; Chong, Edwin Kah Pin, committee member; Bohm, Anton Willem, committee member
Local search algorithms are among the most effective approaches for locating high-quality solutions to a wide range of combinatorial optimization problems. However, our theoretical understanding of these algorithms is very limited, leading to significant problems for both researchers and practitioners. Specifically, the lack of a theory of local search impedes the development of more effective algorithms, prevents practitioners from identifying the algorithm most appropriate for a given problem, and allows widespread conjecture and misinformation regarding the benefits and/or drawbacks of particular algorithms. This thesis represents a significant step toward a theory of local search. Using empirical methods, we develop theoretical models of the behavior of four well-known local search algorithms: a random walk, tabu search, iterated local search, and simulated annealing. The analysis proceeds in the context of the well-known job-shop scheduling problem, one of the most difficult NP-hard problems encountered in practice. The large volume of prior research on the job-shop scheduling problem provides a diverse range of available algorithms and problem instances, in addition to numerous empirical observations regarding local search algorithm behavior; the latter are used to validate our behavioral models. We show that all four local search algorithms can be modeled with high fidelity using straightforward variations of a generalized one-dimensional Markov chain. The states in these models represent sets of solutions a given fixed distance from the nearest optimal solution. The transition probabilities in all of the models are remarkably similar, in that search is consistently biased toward solutions that are roughly equidistant from the nearest optimal solution and solutions that are maximally distant from the nearest optimal solution. Surprisingly, the qualitative form of the transition probabilities is simply due to the structure of the representation used to encode solutions: the binary hypercube. The models account for between 96% and 99% of the variability in the cost required to locate both optimal and sub-optimal solutions to a wide range of problem instances, and provide explanations for numerous phenomena related to problem difficulty for local search in the job-shop scheduling problem. In the course of our analysis, we also disprove many conjectures regarding the behavior and benefits of particular algorithms. Our research indicates that despite their effectiveness, local search algorithms for the job-shop scheduling problem exhibit surprisingly simple run-time dynamics. Further, we observe minimal differences between the dynamical behavior of different algorithms. As expected given similar run-time dynamics, although contrary to numerous reports appearing in the literature, we also show that the performance of different algorithms is largely indistinguishable. Ultimately, our behavioral models serve to unify and provide explanations for a large body of observations regarding problem difficulty for local search in the job-shop scheduling problem, and identify new research areas for the development of more effective local search algorithms.
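
The one-dimensional Markov chain idea in the abstract above can be made concrete with a small sketch: states index the distance to the nearest optimal solution, transitions move one step closer, one step further, or stay, and the expected search cost is the expected hitting time of state 0. The transition probabilities below are invented for illustration (biased toward an intermediate "typical" distance); the dissertation estimates them empirically from runs on job-shop instances.

```python
# Hedged sketch: a generalized 1-D Markov chain over "distance to nearest optimum".
# Expected cost to reach state 0 comes from the standard hitting-time recurrence
# h[i] = 1 + p_down[i]*h[i-1] + p_stay[i]*h[i] + p_up[i]*h[i+1], with h[0] = 0.
import numpy as np

D = 30                                   # maximum distance tracked by the model
d_eq = 20                                # "typical" distance the search drifts toward
states = np.arange(D + 1)
p_down = np.where(states > d_eq, 0.45, 0.30)   # move one step closer to an optimum
p_up = np.where(states > d_eq, 0.30, 0.45)     # move one step further away
p_up[D] = 0.0                            # reflecting upper boundary
p_stay = 1.0 - p_down - p_up

# Solve the linear system for the hitting times of states 1..D.
A = np.zeros((D, D))
b = np.ones(D)
for i in range(1, D + 1):
    A[i - 1, i - 1] = 1.0 - p_stay[i]
    if i > 1:                            # the i=1 down-neighbor is state 0, h[0]=0
        A[i - 1, i - 2] = -p_down[i]
    if i < D:
        A[i - 1, i] = -p_up[i]
h = np.linalg.solve(A, b)
print(f"expected steps to reach an optimum from distance 10: {h[9]:.1f}")
```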

Item Open Access
A meta-modeling approach to specifying patterns (Colorado State University. Libraries, 2004)
Kim, Dae-Kyoo, author; France, Robert B., advisor; Bieman, James M., committee member; Ghosh, Sudipto, committee member; Turk, Daniel E., committee member
A major goal in software development is to produce quality products in less time and with less cost. Systematic reuse of software artifacts that encapsulate high-quality development experience can help one achieve the goal. Design patterns are a common form of reusable design experience that can help developers reduce development time. Prevalent design patterns are, however, described informally (e.g., [35]). This prevents systematic use of patterns. The research documented in this dissertation is aimed at developing a practical pattern specification technique that supports the systematic use of patterns during design modeling. A pattern specification language called the Role-Based Metamodeling Language (RBML) was developed as part of this research. The RBML specifies a pattern as a specialization of the UML metamodel. The RBML uses the Unified Modeling Language (UML) as a syntactic base to enable the use of UML modeling tools for creating and evolving pattern specifications. We used the RBML to develop specifications for design patterns in the Design Patterns book [35], including Abstract Factory, Bridge, Decorator, Observer, State, Iterator, and Visitor. We also used the RBML to define a large application domain pattern for checkin-checkout systems, and used the resulting specification to develop UML designs for a library system and a car rental system. In addition, we used the RBML to specify access control mechanisms as patterns, including Role-Based Access Control (RBAC), Mandatory Access Control (MAC), and a Hybrid Access Control (HAC) that is a composition of RBAC and MAC. The RBML is currently used at NASA for modeling pervasive requirements as patterns. NASA also uses the RBML in the development of the Weather CTAS System, a weather forecasting system. To determine the potential of the RBML to support the development of tools that enable systematic use of patterns, we developed a prototype tool called RBML Pattern Instantiator (RBML-PI) that generates conforming UML models from RBML pattern specifications.

Item Open Access
Compiling dataflow graphs into hardware (Colorado State University. Libraries, 2005)
Rinker, Robert E., author; Najjar, Walid, advisor; Böhm, Wim, committee member; Grit, Dale H., committee member; Jayasumana, Anura P., committee member
Conventional computers are programmed by supplying a sequence of instructions that perform the desired task. A reconfigurable processor is "programmed" by specifying the interconnections between hardware components, thereby creating a "hardwired" system to do the particular task. For some applications such as image processing, reconfigurable processors can produce dramatic execution speedups. However, programming a reconfigurable processor is essentially a hardware design discipline, making programming difficult for application programmers who are only familiar with software design techniques. To bridge this gap, a programming language, called SA-C (Single Assignment C, pronounced "sassy"), has been designed for programming reconfigurable processors. The process involves two main steps: first, the SA-C compiler analyzes the input source code and produces a hardware-independent intermediate representation of the program, called a dataflow graph (DFG). Secondly, this DFG is combined with hardware-specific information to create the final configuration. This dissertation describes the design and implementation of a system that performs the DFG to hardware translation. The DFG is broken up into three sections: the data generators, the inner loop body, and the data collectors. The second of these, the inner loop body, is used to create a computational structure that is unique for each program. The other two sections are implemented by using prebuilt modules, parameterized for the particular problem. Finally, a "glue module" is created to connect the various pieces into a complete interconnection specification. The dissertation also explores optimizations that can be applied while processing the DFG, to improve performance. A technique for pipelining the inner loop body is described that uses an estimation tool for the propagation delay of the nodes within the dataflow graph. A scheme is also described that identifies subgraphs within the dataflow graph that can be replaced with lookup tables. The lookup tables provide a faster implementation than random logic in some instances.

Item Open Access
A systematic approach to testing UML designs (Colorado State University. Libraries, 2006)
Dinh-Trong, Trung T., author; France, Robert B., advisor; Ghosh, Sudipto, advisor; Bieman, James M., committee member; Malaiya, Yashwant K., committee member; Fan, Chuen-mei, committee member
In Model Driven Engineering (MDE) approaches, developers create and refine design models from which substantial portions of implementations are generated. During refinement, undetected faults in an abstract model can traverse into the refined model, and eventually into code. Hence, finding and removing faults in design models is essential for MDE approaches to succeed. This dissertation describes an approach to finding faults in design models created using the Unified Modeling Language (UML). Executable forms of UML design models are exercised using generated test inputs that provide coverage with respect to UML-based coverage criteria. The UML designs that are tested consist of class diagrams, sequence diagrams and activity diagrams. The contribution of the dissertation includes (1) a test input generation technique, (2) an approach to execute design models describing sequential behavior with test inputs in order to detect faults, and (3) a set of pilot studies that are carried out to explore the fault detection capability of our testing approach. The test input generation technique involves analyzing design models under test to produce test inputs that satisfy UML sequence diagram coverage criteria. We defined a directed graph structure, named Variable Assignment Graph (VAG), to generate test inputs. The VAG combines information from class and sequence diagrams. Paths are selected from the VAG and constraints are identified to traverse the paths. The constraints are then solved with a constraint solver. The model execution technique involves transforming each design under test into an executable form, which is exercised with the generated inputs. Failures are reported if the observed behavior differs from the expected behavior. We proposed an action language, named Java-like Action Language (JAL), that supports the UML action semantics. We developed a prototype tool, named UMLAnT, that performs test execution and animation of design models. We performed pilot studies to evaluate the fault detection effectiveness of our approach. Mutation faults and commonly occurring faults in UML models created by students in our software engineering courses were seeded in three design models. Ninety percent of the seeded faults were detected using our approach.
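
The generation step described above (select paths through a graph built from the design, collect the constraints along each path, and hand them to a constraint solver) can be illustrated with a toy sketch. This is not the VAG construction or the dissertation's tooling: the graph, the guard conditions, and the brute-force "solver" below are assumptions chosen to keep the example self-contained.

```python
# Hedged sketch: path-based test input generation.
# Enumerate paths through a small directed graph whose edges carry guard
# constraints, then search for an input assignment satisfying each path.
from itertools import product

# Edges: node -> list of (next_node, guard). Guards constrain inputs x and y.
graph = {
    "start": [("a", lambda x, y: x > 0), ("b", lambda x, y: x <= 0)],
    "a":     [("end", lambda x, y: y > x)],
    "b":     [("end", lambda x, y: y == 0)],
    "end":   [],
}

def paths(node, prefix=()):
    if not graph[node]:
        yield prefix
    for nxt, guard in graph[node]:
        yield from paths(nxt, prefix + ((node, nxt, guard),))

def solve(path, domain=range(-5, 6)):
    """Brute-force stand-in for a real constraint solver."""
    for x, y in product(domain, repeat=2):
        if all(guard(x, y) for _, _, guard in path):
            return {"x": x, "y": y}
    return None

for path in paths("start"):
    route = " -> ".join([path[0][0]] + [edge[1] for edge in path])
    print(route, "test input:", solve(path))
```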

Item Open Access
A systematic approach to testing UML designs (Colorado State University. Libraries, 2007)
Dinh-Trong, Trung T., author; France, Robert B., advisor; Ghosh, Sudipto, advisor
In Model Driven Engineering (MDE) approaches, developers create and refine design models from which substantial portions of implementations are generated. During refinement, undetected faults in an abstract model can traverse into the refined models, and eventually into code. Hence, finding and removing faults in design models is essential for MDE approaches to succeed. This dissertation describes a testing approach to finding faults in design models created using the Unified Modeling Language (UML). Executable forms of UML design models are exercised using generated test inputs that provide coverage with respect to UML-based coverage criteria. The UML designs that are tested consist of class diagrams, sequence diagrams and activity diagrams. The contribution of the dissertation includes (1) a test input generation technique, (2) an approach to execute design models describing sequential behavior with test inputs in order to detect faults, and (3) a set of pilot studies that are carried out to explore the fault detection capability of our testing approach. The test input generation technique involves analyzing design models under test to produce test inputs that satisfy UML sequence diagram coverage criteria. We defined a directed graph structure, named Variable Assignment Graph (VAG), to generate test inputs. The VAG combines information from class and sequence diagrams. Paths are selected from the VAG and constraints are identified to traverse the paths. The constraints are then solved with a constraint solver. The model execution technique involves transforming each design under test into an executable form, which is exercised with the generated inputs. Failures are reported if the observed behavior differs from the expected behavior. We proposed an action language, named Java-like Action Language (JAL), that supports the UML action semantics. We developed a prototype tool, named UMLAnT, that performs test execution and animation of design models. We performed pilot studies to evaluate the fault detection effectiveness of our approach. Mutation faults and commonly occurring faults in UML models created by students in our software engineering courses were seeded in three design models. Ninety percent of the seeded faults were detected using our approach.

Item Open Access
Assessing vulnerabilities in software systems: a quantitative approach (Colorado State University. Libraries, 2007)
Alhazmi, Omar, author; Malaiya, Yashwant K., advisor; Ray, Indrajit, advisor
Security and reliability are two of the most important attributes of complex software systems. It is now common to use quantitative methods for evaluating and managing reliability. Software assurance requires similar quantitative assessment of software security; however, only limited work has been done on quantitative aspects of security. The analogy with software reliability can help in developing similar measures for software security. However, there are significant differences that need to be identified and appropriately acknowledged. This work examines the feasibility of quantitatively characterizing major attributes of security using its analogy with reliability. In particular, we investigate whether it is possible to predict the number of vulnerabilities that can potentially be identified in a current or future release of a software system using analytical modeling techniques.

Item Open Access
Vulnerability discovery in multiple version software systems: open source and commercial software systems (Colorado State University. Libraries, 2007)
Kim, Jin Yoo, author; Malaiya, Yashwant K., advisor; Jayasumana, Anura P., committee member; Ray, Indrakshi, committee member
The vulnerability discovery process for a program describes the rate at which vulnerabilities are discovered. A model of the discovery process can be used to estimate the number of vulnerabilities likely to be discovered in the near future. Past studies have considered vulnerability discovery only for individual software versions, without considering the impact of shared code among successive versions and the evolution of the source code. These factors need to be taken into account to estimate the future vulnerability discovery trend more accurately. This thesis examines possible approaches for incorporating these factors into the vulnerability discovery process studied in previous work. We examine a new approach for quantitatively modeling the vulnerability discovery process, based on shared source code measurements among multiple-version software systems. The applicability of the approach is examined using the Apache HTTP web server and the MySQL database management system (DBMS). This approach shows a better goodness of fit than the fits reported in previous research. Using this revised vulnerability discovery process, the superposition effect, an unexpected pattern of vulnerability discovery observed in previous research, could be explained by the model. The multiple-version software vulnerability discovery model (MVDM) shows a discovery rate that differs from that of the single-version vulnerability discovery model (SVDM) because of the newly considered factors. From these results, we create and apply a new SVDM for open source and commercial software. This single-version discovery process is examined, and the model testing results show that the SVDM can be a viable alternative model. The modified vulnerability discovery model is presented to address weaknesses in previous research, and the underlying theoretical modeling is discussed to provide a more accurate explanation.

Item Open Access
Dimensionality reduction and classification of time embedded EEG signals (Colorado State University. Libraries, 2007)
Teli, Mohammad Nayeem, author; Anderson, Charles W., advisor; McConnell, Ross, committee member; Kirby, Michael, 1961-, committee member
Electroencephalogram (EEG) is the measurement of the electrical activity of the brain measured by placing electrodes on the scalp. These EEG signals give the micro-voltage difference between different parts of the brain in a non-invasive manner. The brain activity measured in this way is currently being analyzed for a possible diagnosis of physiological and psychiatric diseases. These signals have also found a way into cognitive research. At Colorado State University we are trying to investigate the use of EEG as computer input. In this particular research our goal is to classify two mental tasks. A subject is asked to think about a mental task and the EEG signals are measured using six electrodes on his scalp. In order to differentiate between two different tasks, the EEG signals produced by each task need to be classified. We hypothesize that a bottleneck neural network would help us to classify EEG data much better than classification techniques like Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), and Support Vector Machines (SVM). A five-layer bottleneck neural network is trained using a fast convergence algorithm (a variation of the Levenberg-Marquardt algorithm) and Scaled Conjugate Gradient (SCG). Classification is compared between a neural network, LDA, QDA and SVM for both raw EEG data as well as the bottleneck layer output. Results indicate that QDA and SVM do better classification of raw EEG data without a bottleneck network. QDA and SVM always achieved higher classification accuracy than the neural network with a bottleneck layer in all our experiments. The neural network was able to achieve its best classification accuracy of 92% of test samples correctly classified, whereas QDA achieved 100% accuracy in classifying the test data.
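
The comparison described in the EEG abstract above hinges on a "bottleneck" network: a narrow hidden layer forces a low-dimensional representation of the time-embedded signal, which is then used for classification. The sketch below is a generic stand-in, not the dissertation's five-layer network or its Levenberg-Marquardt / SCG training: it uses random data in place of recorded EEG and scikit-learn's MLPClassifier with a narrow middle layer.

```python
# Hedged sketch: a classifier with a narrow "bottleneck" hidden layer,
# compared against LDA and QDA on synthetic two-task, time-embedded data.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, dim = 400, 36                       # e.g., 6 electrodes x 6 time lags (assumed)
X = rng.normal(size=(n, dim))
y = (X[:, :6].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)  # two "tasks"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "bottleneck net": MLPClassifier(hidden_layer_sizes=(20, 3, 20),  # narrow middle
                                    max_iter=3000, random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {model.score(X_te, y_te):.2f}")
```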

Item Open Access
An aspect-based approach to modeling access control policies (Colorado State University. Libraries, 2007)
Song, Eunjee, author; France, Robert B., advisor; Ray, Indrakshi, advisor; Bieman, James M., committee member; Ghosh, Sudipto, committee member; Kim, Joon K., committee member
Access control policies determine how sensitive information and computing resources are to be protected. Enforcing these policies in a system design typically results in access control features that crosscut the dominant structure of the design (that is, features that are spread across and intertwined with other features in the design). The spreading and intertwining of access control features make it difficult to understand, analyze, and change them and thus complicate the task of ensuring that an evolving design continues to enforce access control policies. Researchers have advocated the use of aspect-oriented modeling (AOM) techniques for addressing the problem of evolving crosscutting features. This dissertation proposes an approach to modeling and analyzing crosscutting access control features. The approach utilizes AOM techniques to isolate crosscutting access control features as patterns described by aspect models. Incorporating an access control feature into a design involves embedding instantiated forms of the access control pattern into the design model. When composing instantiated access control patterns with a design model, one needs to ensure that the resulting composed model enforces access control policies. The approach includes a technique to verify that specified policies are enforced in the composed model. The approach is illustrated using two well-known access control models: the Role-Based Access Control (RBAC) model and the Bell-LaPadula (BLP) model. Features that enforce RBAC and BLP models are described by aspect models. We show how the aspect models can be composed to create a new hybrid access control aspect model. We also show how one can verify that composition of a base (primary) design model and an aspect model that enforces specified policies produces a composed model in which the policies are still enforced.
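
For readers unfamiliar with the two access control models named in the abstract above, here is a minimal sketch of the checks they impose: RBAC grants a permission when one of the user's roles carries it, and a Bell-LaPadula style mandatory check forbids reading above one's clearance. The roles, levels, and rule details are illustrative assumptions, not the dissertation's aspect models.

```python
# Hedged sketch: the access decisions behind RBAC and a Bell-LaPadula style
# check, combined the way a hybrid policy might require both to pass.
ROLE_PERMISSIONS = {"doctor": {"read_record", "write_record"},
                    "clerk": {"read_record"}}
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def rbac_allows(user_roles, permission):
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

def blp_allows_read(subject_level, object_level):
    # "no read up": a subject may only read objects at or below its clearance
    return LEVELS[subject_level] >= LEVELS[object_level]

def hybrid_allows_read(user_roles, subject_level, object_level):
    return (rbac_allows(user_roles, "read_record")
            and blp_allows_read(subject_level, object_level))

print(hybrid_allows_read({"clerk"}, "confidential", "public"))   # True
print(hybrid_allows_read({"clerk"}, "public", "secret"))         # False: read up denied
```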

Item Open Access
Scalable and efficient tools for multi-level tiling (Colorado State University. Libraries, 2008)
Renganarayana, Lakshminarayanan, author; Rajopadhye, Sanjay, advisor
In the era of many-core systems, application performance will come from parallelism and data locality. Effective exploitation of these requires explicit (re)structuring of the applications. Multi-level (or hierarchical) tiling is one such structuring technique used in almost all high-performance implementations. Lack of tool support has limited the use of multi-level tiling to program optimization experts. We present solutions to two fundamental problems in multi-level tiling, viz., optimal tile size selection and parameterized tiled loop generation. Our solutions provide scalable and efficient tools for multi-level tiling. Parameterized tiled code refers to tiled loops where the tile sizes are not (fixed) compile-time constants but are left as symbolic parameters. It can enable selection and adaptation of tile sizes across a spectrum of stages through compilation to run-time. We introduce two polyhedral sets, viz., the inset and the outset, and use them to develop a variety of scalable and efficient multi-level tiled loop generation algorithms. The generation efficiency and code quality are demonstrated on a variety of benchmarks such as stencil computations and matrix subroutines from BLAS. Our technique can generate tiled loop nests with parameterized, fixed or mixed tile sizes, thereby providing a one-size-fits-all solution ideal for inclusion in production compilers. Optimal tile size selection (TSS) refers to the selection of tile sizes that optimize some cost (e.g., execution time) model. We show that these cost models share a fundamental mathematical property, viz., positivity, that allows us to reduce optimal TSS to convex optimization problems. Almost all TSS models proposed in the literature for parallelism, caches, and registers lend themselves to this reduction. We present the reduction of five different TSS models proposed in the literature by different authors in a variety of tiling contexts. Our convex optimization based TSS framework is the first one to provide a solution that is both efficient and scalable to multiple levels of tiling.
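
Parameterized tiled code, as described in the abstract above, keeps tile sizes as symbolic parameters rather than compile-time constants, so the same loop nest can be re-tiled at run time. The sketch below shows the idea for a simple blocked matrix multiplication; it is an illustration of parameterized tiling in general, not the inset/outset-based generation technique of the dissertation, and the tile sizes are arbitrary.

```python
# Hedged sketch: a parameterized tiled loop nest -- tile sizes are runtime
# parameters (ti, tj, tk), not constants baked in at compile time.
import numpy as np

def tiled_matmul(A, B, ti=32, tj=32, tk=32):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for ii in range(0, n, ti):           # loops over tiles
        for jj in range(0, m, tj):
            for kk in range(0, k, tk):
                # the point loops inside each tile are expressed as a NumPy
                # block operation; the min(...) bounds handle partial boundary
                # tiles for any choice of the tile-size parameters
                C[ii:min(ii + ti, n), jj:min(jj + tj, m)] += (
                    A[ii:min(ii + ti, n), kk:min(kk + tk, k)] @
                    B[kk:min(kk + tk, k), jj:min(jj + tj, m)])
    return C

A = np.random.rand(100, 80)
B = np.random.rand(80, 60)
assert np.allclose(tiled_matmul(A, B, ti=17, tj=23, tk=11), A @ B)
```

Because ti, tj and tk are ordinary arguments, they could in principle be chosen at run time, for example by an optimizer such as the convex TSS formulation the abstract describes.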

Item Open Access
A vector model of trust to reason about trustworthiness of entities for developing secure systems (Colorado State University. Libraries, 2008)
Chakraborty, Sudip, author; Ray, Indrajit, advisor; Ray, Indrakshi, advisor
Security services rely to a great extent on some notion of trust. In all security mechanisms there is an implicit notion of trustworthiness of the involved entities. Security technologies like cryptographic algorithms, digital signatures, and access control mechanisms provide confidentiality, integrity, authentication, and authorization, thereby allowing some level of 'trust' in other entities. However, these techniques provide only a restrictive (binary) notion of trust and do not suffice to express the more general concept of 'trustworthiness'. For example, a digitally signed certificate does not tell whether there is any collusion between the issuer and the bearer. In fact, without a proper model and mechanism to evaluate and manage trust, it is hard to enforce trust-based security decisions. Therefore there is a need for a more generic model of trust. However, even today, there is no accepted formalism for specifying and reasoning with trust. Secure systems are built under the premise that concepts like "trustworthiness" or "trusted" are well understood, without agreeing on what "trust" means, what constitutes trust, how to measure it, how to compare or compose two trusts, and how a computed trust can help to make a security decision.

Item Open Access
Stability analysis of recurrent neural networks with applications (Colorado State University. Libraries, 2008)
Knight, James N., author; Anderson, Charles W., advisor
Recurrent neural networks are an important tool in the analysis of data with temporal structure. The ability of recurrent networks to model temporal data and act as dynamic mappings makes them ideal for application to complex control problems. Because such networks are dynamic, however, application in control systems, where stability and safety are important, requires certain guarantees about the behavior of the network and its interaction with the controlled system. Both the performance of the system and its stability must be assured. Since the dynamics of controlled systems are never perfectly known, robust control requires that uncertainty in the knowledge of systems be explicitly addressed. Robust control synthesis approaches produce controllers that are stable in the presence of uncertainty. To guarantee robust stability, these controllers must often sacrifice performance on the actual physical system. The addition of adaptive recurrent neural network components to the controller can alleviate, to some extent, the loss of performance associated with robust design by allowing adaptation to observed system dynamics. The assurance of stability of the adaptive neural control system is prerequisite to the application of such techniques. Work in [49, 2] points toward the use of modern stability analysis and robust control techniques in combination with reinforcement learning algorithms to provide adaptive neural controllers with the necessary guarantees of performance and stability. The algorithms developed in these works have a high computational burden due to the cost of the online stability analysis. Conservatism in the stability analysis of the adaptive neural components has a direct impact on the cost of the proposed system. This is due to an increase in the number of stability analysis computations that must be made. The work in [79, 82] provides more efficient tools for the analysis of time-varying recurrent neural network stability than those applied in [49, 2]. Recent results in the analysis of systems with repeated nonlinearities [19, 52, 17] can reduce the conservatism of the analysis developed in [79] and give an overall improvement in the performance of the on-line stability analysis. In this document, steps toward making the application of robust adaptive neural controllers practical are described. The analysis of recurrent neural network stability in [79] is not exact, and reductions in the conservatism and computational cost of the analysis are presented. An algorithm is developed allowing the application of the stability analysis results to online adaptive control systems. The algorithm modifies the recurrent neural network updates with a bias away from the boundary between provably stable parameter settings and possibly unstable settings. This bias is derived from the results of the stability analysis, and its method of computation is applicable to a broad class of adaptive control systems not restricted to recurrent neural networks. The use of this bias term reduces the number of expensive stability analysis computations that must be made and thus reduces the computational complexity of the stable adaptive system. An application of the proposed algorithm to an uncertain, nonlinear control system is provided and points toward future work on this problem that could further the practical application of robust adaptive neural control.
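
The update-biasing idea in the abstract above can be caricatured with a simple sketch: after each gradient step, nudge the recurrent weight matrix away from a stability boundary. Here the "boundary" is approximated by the spectral norm of the recurrent weights reaching 1, which is only a crude stand-in for the dissertation's stability analysis; the network size, data, and margin are all invented for illustration.

```python
# Hedged sketch: gradient updates to a recurrent weight matrix, biased away
# from an (approximate) stability boundary. A spectral-norm bound < 1 is used
# as a crude sufficient condition standing in for a real stability analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = 0.1 * rng.normal(size=(n, n))        # recurrent weights
margin, lr = 0.05, 0.01

def bias_toward_stable(W, margin):
    """If the spectral norm crosses (1 - margin), scale the weights back."""
    norm = np.linalg.norm(W, 2)
    if norm > 1.0 - margin:
        W = W * (1.0 - margin) / norm    # push away from the boundary
    return W

for step in range(200):
    grad = rng.normal(size=(n, n))       # stand-in for a task gradient
    W = W + lr * grad                    # learning update
    W = bias_toward_stable(W, margin)    # keep within the "provably stable" region

print("final spectral norm:", round(np.linalg.norm(W, 2), 3))
```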

Item Open Access
Exploring the bias of direct search and evolutionary optimization (Colorado State University. Libraries, 2008)
Lunacek, Monte, author; Whitley, Darrell, advisor
There are many applications in science that yield the following optimization problem: given an objective function, which set of input decision variables produces the largest or smallest result? Optimization algorithms attempt to answer this question by searching for competitive solutions within an application's domain. But every search algorithm has some particular bias. Our results show that search algorithms are more effective when they cope with the features that make a particular application difficult. Evolutionary algorithms are stochastic population-based search methods that are often designed to perform well on problems containing many local optima. Although this is a critical feature, the number of local optima in the search space is not necessarily indicative of problem difficulty. The objective of this dissertation is to investigate how two relatively unexplored problem features, ridges and global structure, impact the performance of evolutionary parameter optimization. We show that problems containing these features can cause evolutionary algorithms to fail in unexpected ways. For example, the condition number of a problem is one way to quantify a ridge feature. When a simple unimodal surface has a high condition number, we show that the resulting narrow ridge can make many evolutionary algorithms extremely inefficient. Some even fail. Similarly, funnels are one way of categorizing a problem's global structure. A single-funnel problem is one where the local optima are clustered together such that there exists a global trend toward the best solution. This trend is less predictable on problems that contain multiple funnels. We describe a metric that distinguishes problems based on this characteristic. Then we show that the global structure of the problem can render successful global search strategies ineffective on relatively simple multi-modal surfaces. Our proposed strategy that performs well on problems with multiple funnels is counter-intuitive. These issues impact two real-world applications: an atmospheric science inversion model and a configurational chemistry problem. We find that exploiting ridges and global structure results in more effective solutions on these difficult real-world problems. This adds integrity to our perspective on how problem features interact with search algorithms, and more clearly exposes the bias of direct search and evolutionary algorithms.
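
The funnel idea in the last abstract can be quantified in a simple way: sample points in the search space, keep the best fraction, and see whether those good points cluster together (one funnel) or stay spread out (multiple funnels). The sketch below computes such a dispersion-style measure on two toy one-dimensional functions; it is an illustration in the spirit of the metric the abstract mentions, not the dissertation's exact definition, and the test functions are invented.

```python
# Hedged sketch: a dispersion-style measure of global structure.
# If the best-sampled points are much closer together than random points,
# the landscape behaves like a single funnel; otherwise good regions are spread out.
import numpy as np

rng = np.random.default_rng(0)

def avg_pairwise_distance(x):
    return np.abs(x[:, None] - x[None, :]).mean()

def dispersion(f, lo, hi, n_samples=2000, best_frac=0.05):
    x = rng.uniform(lo, hi, n_samples)
    best = x[np.argsort(f(x))[: int(best_frac * n_samples)]]
    return avg_pairwise_distance(best) / avg_pairwise_distance(x)

single_funnel = lambda x: x**2 + 2.0 * np.sin(8 * x)                       # one global basin
multi_funnel = lambda x: np.minimum((x + 4) ** 2, (x - 4) ** 2) + 2.0 * np.sin(8 * x)

print("single-funnel dispersion ratio:", round(dispersion(single_funnel, -6, 6), 2))
print("multi-funnel dispersion ratio: ", round(dispersion(multi_funnel, -6, 6), 2))
```

A ratio well below 1 indicates that the best points concentrate in one region; a ratio near 1 indicates that good solutions are spread across separate funnels, the harder case the abstract discusses.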