Browsing by Author "Duff, William S., advisor"
Item Open Access A tabu search evolutionary algorithm for multiobjective optimization: application to a bi-criterion aircraft structural reliability problem (Colorado State University. Libraries, 2015) Long, Kim Chenming, author; Duff, William S., advisor; Labadie, John W., advisor; Stansloski, Mitchell, committee member; Chong, Edwin K. P., committee member; Sampath, Walajabad S., committee member
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as the weighting and epsilon-constraint methods, are ill-suited to solving these complex multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need for computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical form. This research presents a novel metaheuristic algorithm for complex multiobjective optimization problems, the tabu search evolutionary algorithm (TSEA), which combines the metaheuristic tabu search algorithm with an evolutionary algorithm as embodied in genetic algorithms. TSEA is successfully applied to the bicriteria (i.e., structural reliability and retrofit cost) optimization of aircraft tail structure fatigue life, increasing reliability by prolonging fatigue life. A comparison of TSEA with several state-of-the-art multiobjective optimization algorithms on this application reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and to other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
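A minimal sketch of such a tabu-evolutionary hybrid, assuming minimized objective vectors and solutions encoded as lists of decision variables; the operators, parameters, and archive handling here are illustrative, not the dissertation's TSEA:

    import random
    from collections import deque

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def tsea_sketch(evaluate, mutate, init, generations=200, pop_size=40, tabu_len=100):
        population = [init() for _ in range(pop_size)]
        tabu = deque(maxlen=tabu_len)                 # recently visited solutions
        front = []
        for _ in range(generations):
            offspring = []
            for parent in population:
                child = mutate(parent)
                if tuple(child) in tabu:              # tabu restriction: skip revisited points
                    continue
                tabu.append(tuple(child))
                offspring.append(child)
            candidates = [(ind, evaluate(ind)) for ind in population + offspring]
            # keep only non-dominated candidates: the current Pareto-front approximation
            front = [(ind, f) for ind, f in candidates
                     if not any(dominates(g, f) for _, g in candidates)]
            population = [ind for ind, _ in front][:pop_size]
            while len(population) < pop_size:         # refill from mutated front members
                population.append(mutate(random.choice(front)[0]))
        return front

The tabu list discourages cycling back to recently explored designs, while non-dominated selection over parents and offspring plays the role of elitist pressure toward the Pareto front.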
Item Open Access Investigation of a nonlinear controller that combines steady state predictions with integral action (Colorado State University. Libraries, 2010) Hodgson, David A., author; Duff, William S., advisor; Young, Peter M., advisor; Olsen, Daniel B., committee member; Anderson, Charles W., committee member
Cross-flow water-to-air heat exchangers are a common element in heating, ventilating, and air conditioning (HVAC) systems. In a typical configuration the outlet air temperature is controlled by the flow rate of water through the coil. In this configuration the heat exchanger exhibits nonlinear dynamics; in particular, the system has variable gain. Variable gain presents a challenge for the linear controllers that are typically used to control the outlet air temperature: to ensure stability over the entire operating range, controllers must be tuned at the highest-gain state, which leads to sluggish response in lower-gain states. Previous research has shown that using steady-state predictions of the flow rate needed to produce zero steady-state error improves the transient response of a heat exchanger. In this project a nonlinear controller that provides smooth mixing between steady-state predictions and integral control was introduced. Bounds for the steady-state error introduced by the controller were theoretically derived and experimentally verified. The controller outperformed a properly tuned nominal PI controller for both input tracking and disturbance rejection.
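A hedged sketch of the mixing idea; the weighting function, names, and plant model below are illustrative assumptions, not the dissertation's exact control law or error bounds:

    class MixingController:
        """Blend a steady-state flow prediction with integral trim."""
        def __init__(self, predict_ss_flow, ki, error_scale):
            self.predict_ss_flow = predict_ss_flow   # model: (setpoint, conditions) -> water flow
            self.ki = ki                             # integral gain
            self.error_scale = error_scale           # error magnitude where mixing shifts
            self.integral = 0.0

        def update(self, setpoint, measured, conditions, dt):
            error = setpoint - measured
            u_ss = self.predict_ss_flow(setpoint, conditions)
            self.integral += self.ki * error * dt
            # smooth weight in (0, 1): near 1 during large transients, near 0 at the setpoint
            w = abs(error) / (abs(error) + self.error_scale)
            # large error: rely mostly on the prediction; small error: integral trims model bias
            return u_ss + (1.0 - w) * self.integral

Because the steady-state prediction supplies most of the control action, the integral gain can stay small, avoiding the sluggishness of a PI loop tuned for the highest-gain operating point.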
Item Open Access Metaheuristic approach to solving U-shaped assembly line balancing problems using a rule-base coded genetic algorithm (Colorado State University. Libraries, 2015) Martinez-Contreras, Ulises, author; Duff, William S., advisor; Troxell, Wade O., committee member; Labadie, John W., committee member; Sampath, Walajabad S., committee member
The need to balance a U-shaped production line to minimize production time and cost is a problem frequently encountered in industry. This research presents a fast, efficient algorithm for solving the U-shape line-balancing problem. Heuristic rules used to solve the straight line-balancing problem (LBP) were modified and adapted so they could be applied to a U-shape line-balancing problem model. By themselves, the heuristic rules adapted from straight-line systems can produce good solutions for the U-shape LBP; however, nothing guarantees that this will be the case. Improved solutions can be obtained by using several rules simultaneously to break ties during the task assignment process. In addition to single and simultaneous heuristic rules, basic genetic operations were used to further improve the performance of the assignment process and thus obtain better solutions. Two genetic algorithms are introduced in this research: a direct-coded and an indirect-coded model. The newly introduced algorithms were tested on well-known problems from the literature and performed well compared with other heuristic approaches. The indirect-coded genetic algorithm uses the heuristic rules adapted from the straight LBP as genes to find solutions to the problem. In the direct-coded algorithm, each gene represents an operation in the LBP, and the position of the gene in the chromosome represents the order in which an operation, or task, will be assigned to a workstation. The indirect-coded genetic algorithm introduces sixteen heuristic rules adapted from the straight LBP for use in the U-shape LBP, with each rule represented inside the chromosome as a gene. The rules were implemented in a way that preserves precedence and, at the same time, facilitates the use of genetic operations. Compared with known results from the literature, the algorithm obtained a better solution in 26% of the cases, an equivalent solution in 62% of the cases (not better, not worse), and a worse solution in the remaining 12%. The direct-coded genetic algorithm introduces a new way to construct an ordered arrangement of the task assignments without violating any precedence relations. This method consists of creating a diagram that is isomorphic to the original precedence diagram to facilitate the construction of the chromosome, and crossover and mutation operations are conducted so that precedence relations are not violated. The direct-coded genetic algorithm was tested with the same set of problems as the indirect-coded algorithm. It obtained better solutions than the known solutions from the literature in 22% of the cases, an equivalent solution in 72%, and a less successful solution in 6%. Something that had not been done in other genetic algorithm studies is the use of response surface methodology to optimize the levels of the parameters involved in the response model. The response surface methodology is used to find the best values for the parameters (% of children, % of mutations, number of genes, number of chromosomes) so that good solutions are produced for problems of different sizes (large, medium, small). This allows the best solution to be obtained in a minimum amount of time, saving computational effort. Although both algorithms produce good solutions, the direct-coded genetic algorithm requires less computational effort. The algorithms were then applied to two real industry problems to improve assembly-line operations, resulting in increased efficiency in both production lines.
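One hedged illustration of a precedence-preserving decode for a U-line (not the dissertation's isomorphic-diagram construction): given a precedence-feasible task sequence, tasks may be drawn from either end of the remaining sequence, mirroring the way a U-shaped station can serve both the front and the return side of the line. Task times, cycle time, and the sequence itself are assumed inputs.

    from collections import deque

    def decode_u_line(task_order, task_time, cycle_time):
        """task_order: precedence-feasible sequence, e.g. decoded from a chromosome."""
        assert all(task_time[t] <= cycle_time for t in task_order)
        remaining = deque(task_order)
        stations, current, load = [], [], 0.0
        while remaining:
            front, back = remaining[0], remaining[-1]
            if load + task_time[front] <= cycle_time:      # assign from the front of the sequence
                current.append(remaining.popleft())
                load += task_time[front]
            elif load + task_time[back] <= cycle_time:     # or from the back (U-line return side)
                current.append(remaining.pop())
                load += task_time[back]
            else:                                          # neither end fits: open a new station
                stations.append(current)
                current, load = [], 0.0
        if current:
            stations.append(current)
        return stations

    # example with precedence chain 1 -> 2 -> 3:
    # decode_u_line([1, 2, 3], {1: 2, 2: 4, 3: 3}, cycle_time=5) -> [[1, 3], [2]]

Drawing from the back of a topologically ordered sequence means all of that task's successors are already assigned, which is exactly the U-line feasibility condition; a straight-line decode of the same example would need three stations instead of two.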
Item Open Access Optimization of a centrifugal electrospinning process using response surface methods and artificial neural networks (Colorado State University. Libraries, 2014) Greenawalt, Frank E., author; Duff, William S., advisor; Bradley, Thomas H., committee member; Labadie, John W., committee member; Popat, Ketul C., committee member
For complex system designs involving a large number of process variables, models are typically created to evaluate system behavior under various operating conditions. These models are useful in understanding the effect that various process variables have on the process response(s). Design of experiments (DOE) and response surface methodology (RSM) are typically used together as an effective approach to optimizing a process; RSM and DOE commonly employ first- and second-order algebraic models. Artificial neural networks (ANN) are a more recently developed modeling approach. An evaluation of these three approaches is made in conjunction with experimentation on a newly developed centrifugal electrospinning prototype. The centrifugal electrospinning process is taken from the exploratory design phase through the pre-production phase to determine optimized manufacturing operating conditions. Centrifugal electrospinning is a sub-platform technology of electrospinning for producing nanofibrous materials with a high surface-to-volume ratio, significant fiber interconnectivity, and microscale interstitial spaces. Centrifugal electrospinning is a potentially more cost-effective advanced technology that evolved from traditional electrospinning. Despite a substantial amount of research in centrifugal electrospinning, there are still many aspects of this complex process that are not well understood. This study started with researching and developing a functional centrifugal electrospinning prototype test apparatus which, through patent searches, was found to be innovative in nature. Once a functional test apparatus was designed, the process parameter settings were explored to locate an experimental setup condition where the process was able to produce acceptable sub-micron polymeric fibers. At this point, the traditional RSM/DOE approach was used to find a setting point that produced a media efficiency value close to optimal. An artificial neural network architecture was then developed with the goal of building a model that accurately predicts response surface values. The ANN model was then used to predict responses in place of experimentation on the prototype in the RSM/DOE optimization process. Different levels of use of the ANN were then formulated with the RSM/DOE to investigate its potential advantages in terms of time and cost effectiveness for the overall optimization approach. The development of an innovative centrifugal electrospinning process was successful. A new electrospinning design was developed from the research, and a patent application is currently pending on the centrifugal electrospinning applicator developed from this research. Near-optimum operating settings for the prototype were found. Typically there is a substantial expense associated with evolving a well-designed prototype and experimentally investigating a new process. The use of ANN with RSM/DOE in this research was seen to reduce this expense while identifying settings close to those found when using RSM/DOE with experimentation alone. This research also provides insights into the effectiveness of the RSM/DOE approach in the context of prototype development and into how different combinations of RSM/DOE and ANN may be applied to complex processes.
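The second-order model at the core of an RSM/DOE step is a full quadratic in the process variables, y = b0 + sum_i bi*xi + sum_i bii*xi^2 + sum_{i<j} bij*xi*xj, fitted by least squares to the designed runs; a small sketch with illustrative names (an ANN surrogate trained on the same runs could be substituted for the predictor when the quadratic form is too restrictive):

    import numpy as np
    from itertools import combinations

    def quadratic_design_matrix(X):
        """Expand an n x k factor-settings matrix to the full second-order model."""
        n, k = X.shape
        cols = [np.ones(n)]                                  # intercept
        cols += [X[:, i] for i in range(k)]                  # linear terms
        cols += [X[:, i] ** 2 for i in range(k)]             # pure quadratic terms
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]  # interactions
        return np.column_stack(cols)

    def fit_rsm(X, y):
        beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
        return beta

    def predict_rsm(beta, X_new):
        return quadratic_design_matrix(X_new) @ beta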
Item Open Access Performance and reliability evaluation of Sacramento demonstration novel ICPC solar collectors (Colorado State University. Libraries, 2012) Daosukho, Jirachote "Pong", author; Duff, William S., advisor; Troxell, Wade O., advisor; Burns, Patrick J., committee member; Breidt, F. Jay, committee member
This dissertation focuses on the reliability and degradation of the novel integral compound parabolic concentrator (ICPC) evacuated solar collector over a 13-year period. The study investigates failure modes of the collectors and analyzes the effects of those failures on performance. An instantaneous efficiency model was used to calculate performance and efficiencies from the measurements. An animated graphical ray tracing simulation tool was developed to investigate the optical performance of the ICPC for the vertical and horizontal absorber fin orientations; the animation allows the user to visualize the propagation of rays through the ICPC optics. The ray tracing analysis also showed that the horizontal-fin ICPC's performance was more robust to degradation of the reflective surface. Thermal losses were also part of the performance calculations. The two main degradation mechanisms are reflectivity degradation due to air and fluid leakage into the vacuum enclosure, and loss of vacuum due to leaks through cracks. Reflectivity degradation reduces optical performance, and loss of vacuum reduces thermal performance.

Item Open Access Second-order sub-array Cartesian product split-plot design (Colorado State University. Libraries, 2015) Cortés-Mestres, Luis A., author; Duff, William S., advisor; Simpson, James R., advisor; Chong, Edwin K. P., committee member; Bradley, Thomas H., committee member; Jathar, Shantanu H., committee member
Fisher (1926) laid down the fundamental principles of design of experiments: factorization, replication, randomization, and local control of error. In industrial experiments, however, departure from these principles is commonplace. Many industrial experiments involve situations in which complete randomization may not be feasible because the factor level settings are impractical or inconvenient to change, the resources available to complete the experiment in homogeneous settings are limited, or both. Restricted randomization due to factor levels that are impractical or inconvenient to change can lead to a split-plot experiment. Restricted randomization due to resource limitations can lead to blocking. Situations that require fitting a second-order model under those conditions lead to a second-order block split-plot experiment. Although response surface methodology has experienced phenomenal growth since Box and Wilson (1951), the departure from standard methods needed to tackle the second-order block split-plot design remains, for the most part, unexplored. Most graduate textbooks provide only a relatively basic treatment of the subject. Peer-reviewed literature is scarce, has a limited number of examples, and provides guidelines that are often too general. This deficit of information leaves practitioners ill-prepared to face the roadblocks illuminated by Simpson, Kowalski, and Landman (2004). Practical strategies are provided to help practitioners deal with the challenges presented by the second-order block split-plot design, including an end-to-end, innovative approach for constructing a new form of effective and efficient response surface design referred to as the second-order sub-array Cartesian product split-plot design. This new form of design is an alternative to ineffective split-plot designs currently in use by the manufacturing and quality control community. The design is economical; the prediction variance of the regression coefficients is low and stable; and the aliasing between terms in the model and effects not in the model, as well as the correlation between similar effects not in the model, is low. Based on an assessment using well-accepted key design evaluation criteria, it is demonstrated that second-order sub-array Cartesian product split-plot designs perform as well as or better than historical designs that have been considered standards up to this point.
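One of the evaluation criteria named above, prediction variance, can be checked for any candidate design in a few lines; here Z is the design matrix expanded to the fitted second-order model (for example via a quadratic expansion like the one sketched earlier), the prediction points are assumed inputs, and this ordinary-least-squares form ignores the split-plot error structure that the dissertation's designs account for:

    import numpy as np

    def scaled_prediction_variance(Z, Z_pred):
        """Z: N x p expanded design matrix; Z_pred: m x p expanded prediction points.
        Returns N * z' (Z'Z)^-1 z at each prediction point (smaller and flatter is better)."""
        N = Z.shape[0]
        info_inv = np.linalg.inv(Z.T @ Z)
        return np.array([N * z @ info_inv @ z for z in Z_pred])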
Item Open Access The bids-evaluation decision model development and application for PPP transport projects: a project risks modeling framework (Colorado State University. Libraries, 2010) Jang, Guan-wei, author; Duff, William S., advisor; Alciatore, David G., committee member; Labadie, John W., committee member; Puttlitz, Christian Matthew, committee member
Public-private partnership (PPP) infrastructure projects play a key role in economic growth. Value for money (VFM), a core objective when conducting PPP projects, is defined as the optimal combination of whole-life costs and benefits of the project to meet user requirements. PPP infrastructure projects are generally very complex and have highly dynamic, interdependent risks and uncertainties that occur over the life cycle of a project. By using PPP arrangements, experts transfer and allocate risks to the party most capable of managing them in a cost-effective manner. This requires optimizing the risk allocation between the public and private sectors in order to achieve the best VFM. Risk assessment is a critical element when selecting a project partner and examining projected VFM performance. Unfortunately, the current contractor selection methods used in the industry do not address dynamic, interdependent, and nonlinear risk interactions; such methods are unable to address unstructured or even semi-structured real-world problems. Using these methods, experts often lack a global perspective of the project life cycle and ignore the uncertainty of project performance outcomes. This researcher developed a theoretical approach that applies hybrid techniques to a bidding-proposal selection model from the public perspective. Using System Dynamics modeling and relevant statistical techniques, the dynamic risk interactions and interdependencies over the project construction and operation phases were addressed and quantified. By employing Monte Carlo simulation, this researcher estimated the probability distribution of the overall project net present value (NPV), compounding both downside and beneficial effects over the project construction and operation phases. By applying appropriate decision-making methods to compare the probability distributions of NPV among the bidding proposals, a capable contractor can be selected.
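A minimal Monte Carlo sketch of the NPV-distribution step, with an assumed cash-flow generator and discount rate standing in for the dissertation's System Dynamics risk model:

    import numpy as np

    def simulate_npv(draw_cash_flows, discount_rate, n_sims=10_000, seed=0):
        """draw_cash_flows(rng) -> array of yearly net cash flows over construction and operation."""
        rng = np.random.default_rng(seed)
        npvs = np.empty(n_sims)
        for s in range(n_sims):
            cf = draw_cash_flows(rng)                        # one random project realization
            years = np.arange(1, cf.size + 1)
            npvs[s] = np.sum(cf / (1.0 + discount_rate) ** years)
        return npvs                                          # empirical NPV distribution for one proposal

    # proposals can then be compared on summary measures of their distributions,
    # e.g. np.mean(npvs), np.percentile(npvs, 5), or stochastic-dominance checks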