Browsing by Author "van de Lindt, John, committee member"
Item Open Access: A new hurricane impact level ranking system using artificial neural networks (Colorado State University. Libraries, 2015)
Pilkington, Stephanie F., author; Mahmoud, Hussam, advisor; van de Lindt, John, committee member; Schumacher, Russ, committee member

Tropical cyclones are intense storm systems that form over warm water but have the potential to bring multiple related hazards ashore. While significant advancements have been made in forecasting such extreme weather, estimating the resulting damage and impact to society remains highly complex and requires substantial improvement. This is primarily due to the intricate interaction of multiple variables contributing to socio-economic damage on multiple scales, which makes communicating the risk, location vulnerability, and resulting impact of such an event inherently difficult. To date, the Saffir-Simpson Scale, based on wind speed, is the main ranking system used in the United States to describe an oncoming tropical cyclone. Models currently in use predict loss from more parameters than wind speed alone, such as wind speed, wind-driven rain, and building stock; however, they are not actively used as a means to concisely categorize these events, likely because of the scrutiny a model would face for possibly outputting an incorrect damage total. The relationships between meteorological and locational parameters (population, infrastructure, and geography) are well recognized, which is why many models attempt to account for so many variables. With the help of machine learning, in the form of artificial neural networks, these intuitive connections can be recreated: neural networks form patterns for nonlinear problems much as the human brain would, based on historical data.
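The kind of network described here can be sketched in miniature. The following is an illustrative, self-contained example, not the author's model: a tiny feed-forward network with one hidden layer, trained by gradient descent on synthetic, hand-labeled data whose two features stand in for normalized wind speed and coastal exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, normalized inputs: [wind speed, coastal population density].
# Labels: 1 = high-impact event, 0 = low-impact (synthetic, for illustration).
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.85, 0.4], [0.7, 0.95],
              [0.2, 0.3], [0.1, 0.6], [0.3, 0.2], [0.15, 0.5]])
y = np.array([[1.0], [1.0], [1.0], [1.0], [0.0], [0.0], [0.0], [0.0]])

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One hidden layer of four units: the smallest structure that still
# "forms patterns" the way the abstract describes.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of high impact
    d2 = (p - y) / len(X)             # cross-entropy gradient at the output
    dW2 = h.T @ d2; db2 = d2.sum(0)
    d1 = (d2 @ W2.T) * h * (1 - h)    # backpropagate through the hidden layer
    dW1 = X.T @ d1; db1 = d1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

On this separable toy data the network learns to reproduce the labels; the real models trained on the 66 historical events use many more parameters and output impact levels rather than a binary class.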
By using 66 historical hurricane events, this research attempts to establish these connections through machine learning. In order to link these variables to a concise output, the proposed Impact Level Ranking System is introduced. This categorization system uses levels, or thresholds, of economic damage to group historical events in order to provide a comparative level for a new tropical cyclone event within the United States. Discussed herein are the effects of the multiple parameters contributing to the impact of hurricane events, the use and application of artificial neural networks, the development of six candidate neural network models for hurricane impact prediction, the importance of each parameter to the neural network process, the determination of the type of neural network problem, and finally the proposed Impact Level Ranking System Model and its potential applications.

Item Open Access: Community risk due to wildland urban interface fires: a top-down perspective (Colorado State University. Libraries, 2021)
Chulahwat, Akshat, author; Mahmoud, Hussam, advisor; Ellingwood, Bruce, committee member; van de Lindt, John, committee member; Stevens-Rumann, Camille, committee member

Recent wildfire events, in the United States and around the world, have destroyed thousands of homes and claimed many lives, leaving communities and policy makers with the question of how to manage wildfire risk. Wildland urban interface (WUI) fires have demonstrated the unrelenting destructive nature of these events and signify the need to address the problem, particularly given the prevalent trend of increased fire frequency and intensity. Current approaches to managing wildfires focus on fire suppression and managing fuel build-up in wildlands. Frequent suppression of small-scale fires has removed a natural fuel-reduction mechanism, which in turn results in low-frequency, high-intensity fires.
This phenomenon has been termed the wildfire paradox, and it reinforces the view that wildfires are inevitable and in fact beneficial; the focus should therefore shift toward minimizing potential losses to communities. Reliance on suppression and fuel-management strategies alone has clearly proven inadequate, and mitigation strategies geared toward complete containment of wildfires within the wildlands are unrealistic. The primary goal therefore has to be making communities resilient, with the purpose of minimizing potential losses, which requires vulnerability-based frameworks that provide a holistic understanding of risk. There is a paucity of information regarding the interplay between communities and wildfires: unlike other hazards, for which a significant knowledge base exists, quantification of WUI fire risk remains an open question. To better understand what factors govern the impact of WUI fires, tools to assess and quantify the risk of wildfires to communities are required. In this study, a probabilistic approach for quantifying community vulnerability to wildfires is devised by applying concepts of graph theory. A directed graph is developed to model wildfire inside a community by incorporating different fire propagation modes (convection, radiation, and embers), and an individual ignition model is formulated for each. Through these modes the graph model accounts for relevant community-specific characteristics including wind conditions, community layout, individual structural features, and the surrounding wildland vegetation. The graph model is then used to evaluate the vulnerability of each component of the community using shortest path algorithms. The framework is applied to the infamous 1991 Oakland fire in an attempt to unravel the complexity of community fires.
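The shortest-path idea can be made concrete. The sketch below (a toy graph, not the dissertation's model) runs Dijkstra's algorithm on edge weights of -log(p), so that the shortest path is the most probable fire-spread path from the wildland into the community.

```python
import heapq, math

# Hypothetical directed graph: each edge carries the probability that fire
# spreads from one structure (or wildland patch) to another in one step.
spread_p = {
    "wildland": {"A": 0.9, "B": 0.4},
    "A": {"B": 0.7, "C": 0.2},
    "B": {"C": 0.8},
    "C": {},
}

def most_probable_path(graph, source, target):
    """Dijkstra on -log(p) weights: the shortest path then maximizes the
    product of spread probabilities along the path."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, p in graph[u].items():
            nd = d - math.log(p)          # multiply probabilities = add -logs
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node); node = prev[node]
    path.append(source)
    return path[::-1], math.exp(-dist[target])

path, prob = most_probable_path(spread_p, "wildland", "C")
print(path, round(prob, 3))
```

Here the three-hop route through A and B (0.9 × 0.7 × 0.8 ≈ 0.50) beats both two-hop routes, which is exactly the kind of non-obvious vulnerability a shortest-path analysis surfaces.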
Centrality measures from graph theory are used to identify critical behavior patterns and evaluate the effect of fire mitigation strategies. Using the vulnerability framework developed, community risk is further quantified. Risk is generally defined by three components: (1) hazard intensity, (2) degree of exposure, and (3) exposed elements. In the context of wildfires, risk is formulated by combining the probability of wildland ignition, the probability of fire reaching the community, and the vulnerability of the community. Four communities across the United States are selected, and risk analysis is conducted for the months of May through September to understand the correlation between community risk and community characteristics. Unlike current practice, the results are shown to be community-specific, with substantial dependency of risk on meteorological conditions, environmental factors, and community characteristics and layout. For the final part of this study, an intervention optimization is formulated and applied to the four communities to observe the effect of different intervention measures on community risk. The findings show the need to explore unique, viable solutions to reduce risk for each community independently, as opposed to embracing a generalized approach, as is currently the case.

Item Open Access: Digital twins for structural inspection, assessment, and management (Colorado State University. Libraries, 2023)
Perry, Brandon J., author; Guo, Yanlin, advisor; Atadero, Rebecca, committee member; van de Lindt, John, committee member; Mahmoud, Hussam, committee member; Ortega, Francisco, committee member

With the rapid advancements in remote sensing, uncrewed aircraft systems (UAS), computer vision, and machine learning, more techniques to maintain and evaluate the performance of the built infrastructure have become available; however, these techniques are not always straightforward to adopt due to remaining challenges in data analytics and the lack of executable actions that can be taken from the results. This work proposes a Digital Twin, a virtual representation of a structure with a myriad of applications for better assessing and managing civil infrastructure. The proposed Digital Twin includes techniques to store, visualize, and analyze the data collected from a UAS-enabled remote sensing inspection, together with computational models that support decision-making regarding the maintenance and operation of structures. The data analysis module identifies the location, extent, and growth of a defect over time, as well as the structural components and connections, from the collected images using artificial intelligence (AI) and computer vision. In addition, three-component (3C) dynamic displacements are measured from videos of the structure. A model library within the Digital Twin is proposed to assess the structure's performance, comprising three types of models: 1) a visualization model to provide location-based data query, 2) an automatically generated finite element (FE) model as a basis for simulation, and 3) a surrogate model that can quickly predict the structure's behavior. Ultimately, the models in the library suggest executable actions that can be taken on a structure to better maintain and repair it.
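The surrogate-model idea in the model library can be illustrated with a minimal sketch: sample a stand-in for an expensive FE analysis at a few points, fit a cheap polynomial, and query it instantly. The FE response below is invented purely for illustration.

```python
import numpy as np

# Stand-in for an expensive finite-element run: midspan displacement (mm)
# as a function of load (kN).  In a real digital twin this would be the
# auto-generated FE model; here it is a made-up closed form.
def fe_displacement(load_kn):
    return 0.012 * load_kn ** 2 + 1.5 * load_kn

# Sample the "expensive" model at a few design points ...
loads = np.linspace(0.0, 100.0, 6)
disps = fe_displacement(loads)

# ... and fit a quadratic surrogate that can be evaluated instantly.
coeffs = np.polyfit(loads, disps, deg=2)
surrogate = np.poly1d(coeffs)

query = 63.0  # kN, a load level not among the training points
print(round(float(surrogate(query)), 2), round(fe_displacement(query), 2))
```

Because the underlying response here is itself quadratic, the surrogate matches the "FE model" essentially exactly; for real structures the surrogate trades a small approximation error for orders-of-magnitude faster queries.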
A discussion is presented showing how the Digital Twin can assist decision-making for structural management.

Item Open Access: Essays on the economics of natural disasters (Colorado State University. Libraries, 2020)
Hu, Yuchen, author; Cutler, Harvey, advisor; Zahran, Sammy, committee member; Mushinski, David, committee member; van de Lindt, John, committee member

Natural hazards occur frequently, and the costs associated with these events run well into the billions of dollars. The rising frequency and cost of natural disasters require a comprehensive understanding of their impacts on the economic system, along with mitigation strategies for local communities that can minimize these losses. The purpose of Chapter 1 is to demonstrate a linkage between civil engineering and economic models to accomplish these objectives. To do this, I build a spatial computable general equilibrium (SCGE) model for Shelby County, Tennessee, which requires an extensive data set drawn from eight different data sources. I then develop advanced methods that integrate simulation models from engineering and economics. Civil engineers have created a range of simulation models that estimate the impact of a hypothetical earthquake on damage to buildings, utilities, and the transportation network. These damages are integrated into the SCGE model to simulate a range of economic outcomes. I find that the SCGE model is more advanced than previous attempts in capturing the adjustment behaviors of businesses and households to external shocks. I also find that, to better estimate the economic impacts, the model must be simulated with the three types of physical damage jointly rather than individually. Chapter 2 investigates a hidden layer of the impact of natural disasters: the spillover effect of disaster-induced migration on the receiving areas' labor markets.
Using the difference-in-differences approach, I empirically compare hourly wage rates in areas that received evacuees from Hurricane Katrina to areas that did not. I find that in the export-oriented industry, the inflow of migrants due to Katrina slightly reduced hourly wage rates for both low- and high-skilled workers. However, in localized industries, where the inflow of migrants also increased the demand for local goods and services, the inflow of evacuees raised hourly wage rates for high-skilled workers and had no significant impact on low-skilled workers. These results are consistent with previous literature in that immigrants did affect local labor markets, but at a small magnitude. Chapter 3 proposes the setup of a Rainy-Day Fund (RDF) through tax hikes to prepare local governments for future external shocks. To minimize the cost of the tax hikes to the economy while achieving the target amount of the RDF, I use the SCGE model developed in Chapter 1 to solve for an optimal path of tax hikes over time. The process starts with an endogenized cost function measured by the forgone output that could have been produced had there been no changes in the tax system. Built on profit and utility maximization in response to changes in taxes, the cost function expands the theoretical setup of Barro (1979) and Ghosh (1995) by allowing any factor that influences output to enter the optimization process. Moreover, the cost function in any period depends not only on the tax rates in that period but also on the tax rates in previous periods, since earlier changes to tax rates can influence the current economy through changes in investment and capital. I find that the optimal trajectory of tax hikes tends to rise over time.
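The intuition behind a rising optimal path of tax hikes can be reproduced with a deliberately stylized toy problem (not the SCGE cost function): if a hike in period t keeps distorting the economy in every later period, earlier hikes are costlier, so the optimizer back-loads them.

```python
# Stylized Rainy-Day-Fund problem: choose tax hikes x_t that sum to a target
# while minimizing a quadratic distortion cost.  Because a hike in period t
# keeps distorting the economy in every later period (through investment and
# capital), its cost weight is the number of periods it affects.
T = 10
target = 5.0  # total percentage points of tax increase needed for the fund

weights = [T - t for t in range(T)]  # a period-0 hike distorts all T periods

# Closed-form solution of  min sum_t w_t * x_t^2  s.t.  sum_t x_t = target:
# x_t is proportional to 1 / w_t (standard Lagrange-multiplier result).
inv_sum = sum(1.0 / w for w in weights)
hikes = [(1.0 / w) / inv_sum * target for w in weights]

print([round(x, 3) for x in hikes])
```

In this toy setting the hikes increase monotonically over the planning horizon, mirroring the rising trajectory the essay finds; the dissertation's actual path comes from the full general-equilibrium cost function.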
The rate of increase depends on the magnitude of the elasticity of labor supply to real wages and on the regional planners' interest in including economic outcomes beyond the planning horizon.

Item Open Access: Experimental assessment of cracked steel beams under mechanical loading and elevated temperatures (Colorado State University. Libraries, 2016)
Ahmadi, Bashir, author; Mahmoud, Hussam, advisor; van de Lindt, John, committee member; Strong, Kelly, committee member

Bridge fire is a major engineering problem that has been gaining attention from researchers and engineers. As reported in the New York Department of Transportation database, there have been approximately 50 cases of bridge collapse due to fire nationwide, with many more cases where fires resulted in repairable damage. The fires are typically due to vehicle crashes, arson, and in some cases wildfires, and the affected bridges are mostly fabricated from steel, concrete, and timber. The problem of bridge fire is further aggravated by the presence of fatigue cracks in steel bridges. Various experimental and numerical studies have been conducted to evaluate the response of steel beams under elevated temperature; however, to date, there is a lack of information on the response of steel beams with pre-existing cracks under elevated temperature. The importance of evaluating cracked steel beams under elevated temperature stems from the fact that many steel bridges currently in service suffer from major deterioration manifested in fatigue cracks resulting from the cyclic loading of daily traffic. With no available data on the failure behavior of cracked steel beams under fire, this thesis introduces a new testing protocol for evaluating the response of cracked steel beams under elevated temperature.
Specifically, the results of experimental tests, conducted in the structural engineering laboratory at Colorado State University, on four initially cracked W8x24 steel beams under point loading and non-uniform elevated temperature are presented. The cracks are introduced across the bottom flange, and the beams are loaded to failure while being subjected to various non-uniform elevated temperature distributions ranging from 200 °C to 600 °C. The competition between two failure modes, excessive deflection and fracture along the crack plane, is evaluated with respect to the temperature distributions in the beams. In cases where fracture prevailed, different fracture types were observed, including brittle fracture, ductile fracture, and brittle/ductile transition failure, depending on the temperature distribution. The results presented include load versus displacement and time versus temperature curves. In addition, the digital image correlation method was utilized to develop strain and displacement fields around the cracked regions. This experimental study provides an alternative method for evaluating cracked beams under elevated temperature and will provide engineers with insight into various behavioral aspects of steel beams under the investigated loading demands. Furthermore, the results of this study can be used to calibrate advanced numerical finite element models, capable of capturing large deformations and fracture, which can in turn be used to conduct a parametric study of various sizes of bridge girders under an ensemble of thermal loading scenarios.

Item Open Access: Faulting in the Foam Creek Stock, North Cascade Mountains, Washington (Colorado State University. Libraries, 2017)
Kahn, Adrian Walter, author; Magloughlin, Jerry, advisor; Ridley, John, committee member; van de Lindt, John, committee member

The Foam Creek Stock (FCS) is a tonalite pluton in the northwestern part of the Nason Terrane in Washington state, a region that has experienced a wide range of structural regimes. Faults cutting the FCS have been studied through field, microscope, fluid inclusion, and geochemical methods. The purpose of this study is to understand the nature of these faults and their place in the regional tectonic history. The FCS is cut by two populations (P1 and P2) of small-scale faults that share similar orientations but are microstructurally and geochemically distinct. P1 faults are generally E-dipping and host a distinct, bleached alteration halo and a thin, highly altered fault core containing the secondary minerals adularia (K-feldspar), chlorite, albite, and actinolite, along with remnant host quartz and altered plagioclase. P1 fault cores are thin (<1 mm) and display small apparent offsets (<2 cm). P2 faults cut P1 faults; dip N, S, and E with generally steeper dips and greater displacements (average 14.1 cm apparent offset) than P1 faults; and host predominantly secondary adularia and fractured host rock within thicker (~3 cm), better-developed cataclastic fault cores. P1 faults show microstructural evidence of grain boundary bulging in quartz, along with seams interpreted to have hosted diffusive mass transfer (DMT) within the cataclastic fault core, suggesting that a component of aseismic deformation accommodated some of the strain. In contrast, P2 fault cores range from random-fabric to weakly foliated cataclasite and host aseismic DMT as well as coseismic pseudotachylyte, indicating that strain was accommodated across a wide range of strain rates. Kinematic analysis using both outcrop and microscale observations indicates that P1 faults are reverse, whereas P2 faults are normal and sinistral.
Chemical analyses of the fault cores of the two populations, using a portable XRF, reveal geochemical changes accompanying faulting. The most significant changes are that P1 faults are enriched in Pb (~100%) and depleted in Ti (~50%), Ca, Sr, and Zn, whereas P2 faults are enriched in K (~40%) and Rb and depleted in Fe (~30%), Mn, Ca, Sr, and Zn. Interpretation of textural relationships between primary and secondary minerals suggests the fluids that migrated through both fault populations may have been initially sodic, and that a shift in composition produced a sequence of alteration reactions with the host FCS tonalite. The final product of this changing fluid-rock system was adularia precipitation within fault cores, which in turn served to strengthen the faults. Mechanical strengthening likely inhibited reshear of the faults, and thus additional strain of the pluton was accommodated along newly nucleated fault planes through slip delocalization. Fluid inclusion microthermometry, combined with observed deformation mechanisms and secondary mineral assemblages, allowed estimation of fluid trapping temperatures and revealed conditions of ~289 ± 24 °C during P1 faulting and ~262 ± 23 °C during P2 faulting. When combined with T-t curves constructed using K-Ar and Ar-Ar data from previous studies, the estimated ages of the faults are ~71.9 ± 3.5 Ma for P1 and ~69.2 ± 3.5 Ma for P2. Using these ages, it is proposed that the Late Cretaceous deformation of the FCS observed in this study records relative counterclockwise rotation of the converging Farallon Plate. The resultant shift from E-W compression to dextral transpression was locally expressed as reverse P1 faults followed by N-S P2 extension with a sinistral component during regional post-metamorphic uplift and dextral shear.

Item Open Access: Geometrically and materially nonlinear analysis using material point method (Colorado State University. Libraries, 2022)
Asiri, Abdullah N., author; Heyliger, Paul, advisor; van de Lindt, John, committee member; Chen, Suren, committee member; Cheney, Margaret, committee member

Computational engineering has become an effective tool across many engineering disciplines, providing suitable simulation models for complex problems. Computational models are also strongly recommended as alternatives to experiments because of the cost and time experiments consume, and as the field has matured, the accuracy of computational models has steadily improved. The finite element method (FEM) is one of the best-known computer simulation techniques and has been adopted widely in scientific and technical fields. It is an excellent tool for many engineering analyses; however, for large-deformation behavior the FEM can break down, because severe distortion of the finite discretization causes a loss of accuracy. Mesh-free methods are therefore appropriate for such problems. The material point method (MPM) is one such improved mesh-free method, an extension of the Particle In Cell (PIC) method used in fluid mechanics modeling. Both static and dynamic applications are simulated with a two-dimensional material point method model. The main objectives here are to simulate and validate the material point method against analytical solutions for different solid mechanics applications, and to examine the formulation of nonlinear behavior using the MPM. The research is achieved by studying two hypotheses: 1) beam mechanics analysis using the material point method, and 2) damage mechanics analysis using the material point method. Both hypotheses consider different assumptions for the geometry and material constants.
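The core MPM ingredient, transferring particle mass and momentum to a background grid through shape functions, can be sketched in one dimension (a generic illustration, not the dissertation's two-dimensional code):

```python
import numpy as np

# 1-D background grid: nodes at 0, 1, 2, ..., with cell size h = 1.
h = 1.0
nodes = np.arange(5, dtype=float)

# Material points (particles) carry the state: position, mass, velocity.
xp = np.array([0.3, 1.2, 1.7, 2.6])
mp = np.array([1.0, 1.0, 2.0, 1.5])
vp = np.array([0.5, -0.2, 0.1, 0.4])

def linear_shape(node_x, particle_x):
    """Standard hat function N_i(x_p) coupling material points to the grid."""
    return max(0.0, 1.0 - abs(particle_x - node_x) / h)

# Particle-to-grid (P2G) transfer of mass and momentum.
m_grid = np.zeros_like(nodes)
mv_grid = np.zeros_like(nodes)
for x, m, v in zip(xp, mp, vp):
    for i, xi in enumerate(nodes):
        w = linear_shape(xi, x)
        m_grid[i] += w * m
        mv_grid[i] += w * m * v

# Grid velocities (only where the grid actually received mass).
v_grid = np.divide(mv_grid, m_grid, out=np.zeros_like(m_grid),
                   where=m_grid > 0)
print(m_grid.sum(), mv_grid.sum())
```

Because the hat functions form a partition of unity, total mass and momentum are conserved in the transfer; a full MPM step would next solve the momentum equation on the grid and map the update back to the particles, which is why the method tolerates large deformations that would tangle an FEM mesh.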
Material point simulations for the two hypotheses are conducted on the RMACC Summit supercomputer using FORTRAN and MATLAB.

Item Open Access: Integration of graphical, physics-based, and machine learning methods for assessment of impact and recovery of the built environment from wind hazards (Colorado State University. Libraries, 2019)
Pilkington, Stephanie F., author; Mahmoud, Hussam, advisor; Ellingwood, Bruce, committee member; van de Lindt, John, committee member; Zahran, Sammy, committee member; McAllister, Therese, committee member; Hamideh, Sara, committee member

The interaction between a natural hazard and a community has the potential to result in a natural disaster with substantial socio-economic losses. In order to minimize disaster impacts, researchers have been improving building codes and exploring further concepts of community resilience. Community resilience refers to a community's ability to absorb a hazard (minimize impacts) and "bounce back" afterwards (quick recovery time). The two main components in modeling resilience are therefore the initial impact and the subsequent recovery time; with respect to a community's building stock, this means the damage state a building sustains and how long it takes to repair and reoccupy that building. Probabilistic and physics-based methods have been the traditional approach to modeling these concepts, but with advancements in artificial intelligence and machine learning, as well as data availability, it may be possible to model impact and recovery differently. Most current methods are highly constrained by their topic area: a damage-state model, for example, focuses on structural loading and resistance, while social vulnerability models independently focus on certain social demographics.
These models currently operate independently and are then aggregated, but with the complex connectivity available through machine learning, structural and social characteristics may be combined simultaneously in one network model. The popularity of machine learning predictive modeling across many applications has risen because such models can represent complex networks and can sometimes identify critical variables, or mechanisms of interaction among variables, that were previously unknown. The research presented herein outlines a method of using artificial neural networks to model building damage and recovery times. The incorporation of graph theory to analyze the resulting models also provides insight into the "black box" of artificial intelligence and into the interaction of socio-technical parameters within the concept of community resilience. The neural network models are then verified by hindcasting the 2011 Joplin tornado, predicting individual building damage and the time it took to repair and reoccupy each building. The results show that these methods are viable for modeling damage, but more work may be needed to model recovery at the same level of accuracy. It is therefore recommended that artificial neural networks be used primarily for problems where the variables are well known but their interactions are not easily understood or modeled. The graphical analysis also reveals the importance of social parameters across all points in the resilience process, while the structural components remain important mainly in determining the initial impact. Final importance factors are determined for each of the variables evaluated herein.
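One common way to open a neural network's "black box" and rank input variables, in the spirit of the importance factors described here, is a Garson-style weight-based importance measure. The weights and variable names below are synthetic, not taken from the trained models.

```python
import numpy as np

# Synthetic trained weights for a 3-input, 2-hidden, 1-output network.
# Hypothetical inputs: wind speed, building age, median household income.
W_ih = np.array([[ 2.0, -0.5],
                 [ 0.3,  0.8],
                 [-1.0,  1.2]])    # input -> hidden weights
W_ho = np.array([[1.5], [-0.7]])   # hidden -> output weights

# Garson-style importance: share each hidden unit's (absolute) output weight
# among the inputs in proportion to |input weight|, then sum over hidden units.
abs_ih = np.abs(W_ih)
abs_ho = np.abs(W_ho).ravel()
contrib = (abs_ih / abs_ih.sum(axis=0)) * abs_ho  # (inputs, hidden units)
importance = contrib.sum(axis=1)
importance /= importance.sum()                    # normalize to fractions

for name, imp in zip(["wind speed", "building age", "income"], importance):
    print(f"{name}: {imp:.2f}")
```

With these made-up weights, wind speed dominates because it feeds strongly into the hidden unit with the largest output weight, which is the kind of structural insight the graph analysis extracts from the real trained networks.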
It is suggested that, moving forward, modeling approaches consider integrating how a community interacts with its infrastructure, since the human components are what make a natural hazard a disaster, and tracing artificial neural network connections may provide a starting point for such integration into current modeling approaches.

Item Open Access: Multi-scale traffic performance modeling of transportation systems subjected to multiple hazards (Colorado State University. Libraries, 2019)
Hou, Guangyang, author; Chen, Suren, advisor; van de Lindt, John, committee member; Atadero, Rebecca, committee member; Trumbo, Craig, committee member

Transportation systems are very vulnerable to natural and man-made hazards such as earthquakes, floods, hurricanes, tsunamis, and terrorism. In past years, extreme hazards have caused significant physical and functional damage to transportation systems around the world. Disruption of transportation systems by multiple hazards impedes social and commercial activities and hampers post-disaster emergency response and the long-term recovery of the damaged community. The main purpose of this dissertation is to develop advanced performance assessment techniques for transportation systems subjected to multiple hazards at both the link and network levels. It is expected that the developed techniques will help stakeholders make risk-informed decisions about effective prevention and preparation measures to enhance the resilience of transportation systems. A suite of simulation methodologies is developed to evaluate the performance of critical transportation components (e.g., bridges and road segments) and transportation networks subjected to multiple hazards. Firstly, an advanced traffic flow simulation framework is developed to predict the post-hazard performance of a typical highway system under hazardous conditions.
Secondly, a simulation methodology is developed to study the traffic performance of degraded road links that are partially blocked following extreme events. Thirdly, a new approach is proposed to develop travel time functions for partially blocked roads in urban areas through microscopic traffic simulation. Fourthly, an integrated model is developed to assess the single-vehicle traffic safety performance of stochastic traffic flow under hazardous driving conditions. Finally, an integrated probabilistic methodology is developed to model the performance of infrastructure disrupted by urban trees felled by extreme winds.

Item Open Access: Natural frequencies of twisted cables: a numerical and experimental study (Colorado State University. Libraries, 2021)
Alkharisi, Mohammed K., author; Heyliger, Paul, advisor; van de Lindt, John, committee member; Chen, Suren, committee member; Stright, Lisa, committee member

As the use of cables in engineering applications has grown, a better understanding of their mechanical and dynamic behavior has become more critical. Over the past several decades, many analytical, experimental, and finite element models have been developed to investigate vibrations of cable structures. This attention reflects the importance of such structures, which are more challenging than many ordinary structures because of geometric nonlinearity and other combined effects. In addition, the twist along the cable length couples the various kinematic variables of the cable system. This work aims to predict and investigate the natural frequencies and the translational and rotational mode shapes, which occur simultaneously, for both horizontal and inclined sagged cables, using numerical and experimental methods.
An efficient numerical procedure using elasticity-based finite elements is presented to generate the primary elastic stiffness coefficients of single-layered six-wire strands subjected to axial and torsional loads in three-dimensional space. Cable models with lay angles varying from 5 to 30 degrees are then compared with eight different one-dimensional analytical models over the same range of angles. The finite element model gives stiffness coefficients in good agreement with the analytical models for angles below the maximum angle of the cable. The free vibration behavior of untwisted and twisted cables is then analyzed using the derived stiffness and mass matrices. When discretized over the horizontal span, the sagged cable is represented using transformed axial, coupling, and torsional characteristics, and the resulting two-node cable element has three translational and three rotational degrees of freedom. A similar computational approach is used for inclined cables with inclination angles from 10 to 60 degrees. The natural frequencies and mode shapes are found to be in very good agreement with results obtained from extensive experimental tests on identical cable geometries and materials, in which a harmonically time-varying support motion is applied under different conditions and the acceleration and angular velocity time histories are collected by sensors mounted at the mid and quarter spans of the cables. In addition to the experimental results, the frequency spectrum and the translational and rotational mode shapes are analyzed and compared with the limited analytical models available in the literature and with the finite element software ABAQUS. Practical examples demonstrate the validity and applicability of the finite element model for untwisted and twisted cables. The influence of variations in the principal and microstructural parameters on the dynamics of the cable is then investigated.
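For intuition about cable natural frequencies, the classical taut-string formula is a useful baseline, even though it ignores the sag, inclination, and twist coupling that the full elasticity-based model captures. All parameter values below are illustrative.

```python
import math

def taut_cable_frequencies(tension_n, mass_per_m, length_m, n_modes=3):
    """Natural frequencies f_n = (n / 2L) * sqrt(T / m) of an ideal taut
    string -- no sag, bending stiffness, or twist coupling, all of which
    the full elasticity-based finite element model accounts for."""
    c = math.sqrt(tension_n / mass_per_m)  # transverse wave speed (m/s)
    return [n * c / (2.0 * length_m) for n in range(1, n_modes + 1)]

# Illustrative 20 m strand under 5 kN tension, 1.8 kg/m.
freqs = taut_cable_frequencies(tension_n=5000.0, mass_per_m=1.8, length_m=20.0)
print([round(f, 2) for f in freqs])
```

The ideal string gives exact integer harmonics (f2 = 2 f1, f3 = 3 f1); sag and twist coupling break that spacing and couple translational and rotational modes, which is precisely what the dissertation's model and experiments quantify.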
This study shows that the elasticity, twist coupling, initial sag, inclination angle, and self-weight of the cable play a considerable role in the frequency and modal coupling behavior. It further suggests that some of the available simple models may not be adequate to fully capture the significant levels of modal coupling in the cable's dynamic behavior. The methods used in this study are finally extended to experimentally determine the internal damping ratios and the reduction in in-plane peak motions when a damper is used.

Item Open Access: Performance of steel structures subjected to fire following earthquake (Colorado State University. Libraries, 2016)
Memari, Mehrdad, author; Mahmoud, Hussam, advisor; Ellingwood, Bruce, committee member; van de Lindt, John, committee member; Heyliger, Paul, committee member; Bandhauer, Todd, committee member

Fires following earthquakes are sequential hazards that may occur in metropolitan areas with moderate-to-high seismicity. The potential for fire ignition is elevated by various factors, including damage to active and passive fire protection systems following a strong ground motion. In addition, earthquake damage to transportation networks, water supplies, and communication systems can hinder the response of fire departments to post-earthquake fire events, and simultaneous ignitions caused by a strong earthquake can grow into mass conflagrations in the shaken area, leading to catastrophic scenarios including structural collapse, hazardous material release, loss of life, and the inability to provide needed emergency medical care. This has been demonstrated through various historical events, including the fires following the 1906 San Francisco earthquake and the 1995 Kobe earthquake, among others, making fire following earthquake the most dominant contributor to earthquake-induced losses of property and life in the United States and Japan over the last century.
From a design perspective, current performance-based earthquake design philosophy allows certain degrees of damage in the structural and non-structural members of steel-framed buildings during an earthquake. The cumulative structural damage caused by the earthquake can reduce the load-bearing capacity of structural members in a typical steel building. In addition, potential damage to active and passive fire protection following an earthquake leaves the steel material exposed to elevated temperatures in the event of a post-earthquake fire. Damage to steel members and components following an earthquake, combined with damage to fire protection systems, can therefore reduce the ability of steel buildings to withstand fire following seismic events, and there is a pressing need to quantify the performance of steel structures under fire following earthquake in moderate-to-high seismic regions. The aim of this study is to assess the performance of steel structural members and systems under the cascading hazards of earthquake and fire. The research commences with an evaluation of the stability of hot-rolled W-shape steel columns subjected to earthquake-induced lateral deformations followed by fire loads. Based on the stability analyses, equations are proposed to predict the elastic and inelastic buckling stresses of steel columns exposed to fire following earthquake, considering a wide variety of variables. The performance of three steel moment-resisting frames (3, 9, and 20 stories) with reduced beam section connections is then assessed under multi-story fires following a suite of earthquake records. The response of structural components (beams, columns, and critical connection details) is investigated to evaluate the demand and system-level instability under fire following earthquake.
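As a toy illustration of how elevated temperature erodes the elastic stability of a steel column, Euler's formula can be combined with elastic-modulus reduction factors for heated carbon steel. The reduction table below follows EN 1993-1-2 and is an assumption of this sketch, not the buckling equations proposed in the dissertation:

```python
import numpy as np

# Illustrative elastic-modulus reduction factors for carbon steel at
# elevated temperature (values follow EN 1993-1-2; treated here as an
# assumption, not the dissertation's proposed equations).
TEMPS_C = np.array([20, 100, 200, 300, 400, 500, 600, 700, 800])
K_E     = np.array([1.00, 1.00, 0.90, 0.80, 0.70, 0.60, 0.31, 0.13, 0.09])

def euler_buckling_stress(temp_c, slenderness, E20=2.0e11):
    """Elastic (Euler) buckling stress of a steel column at temperature
    temp_c (deg C) and slenderness ratio KL/r, in Pa."""
    kE = np.interp(temp_c, TEMPS_C, K_E)          # degraded modulus ratio
    return np.pi**2 * kE * E20 / slenderness**2   # sigma_cr = pi^2 E(T)/(KL/r)^2

# Example: a column with KL/r = 100 retains only 31% of its ambient
# elastic buckling capacity at 600 C.
s20 = euler_buckling_stress(20, 100)
s600 = euler_buckling_stress(600, 100)
print(s600 / s20)
```

In the fire-following-earthquake setting, the slenderness term would additionally reflect earthquake-induced lateral deformation and residual drift, which is what the dissertation's proposed equations account for.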
Next, a performance-based framework is established for the probabilistic assessment of steel structural members and systems under the combined events of earthquake and fire. A stochastic model of the relevant random variables is used to conduct the probabilistic performance-based analysis. The framework allows structural engineers to generate fragility curves for steel columns and frames under the multiple hazards of earthquake and fire. The results demonstrate that instability can be a major concern in steel structures, at both the member and system levels, under these sequential events and highlight the need to develop provisions for the design of steel structures subjected to fire following earthquake. Furthermore, a suite of recommendations for future studies is proposed based on the findings of this dissertation.

Item Open Access Reliability assessment of the deteriorating reinforced concrete bridges subjected earthquake and foundation scour(Colorado State University. Libraries, 2016) Ren, Jingzhe, author; Ellingwood, Bruce, advisor; van de Lindt, John, committee member; Shuler, Scott, committee member
This study assesses the structural reliability of a deteriorating reinforced concrete bridge subjected to earthquake and foundation scour during its service life. It relies on probabilistic models of natural hazards and of structural deterioration based on in-service inspection, and utilizes methods of time-dependent reliability assessment. The results reveal the potential influences of competing hazards on the structural response of bridges over their service lives.
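Time-dependent reliability assessment of the kind described above asks how the failure probability of a deteriorating component grows over its service life. A crude Monte Carlo sketch, with purely illustrative capacity, deterioration-rate, and demand distributions (not the bridge model or hazard models used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(t_years, n=200_000):
    """Crude Monte Carlo estimate of P(failure at year t) for a component
    whose capacity degrades linearly while the annual peak demand is
    lognormal. All parameter values below are illustrative assumptions."""
    R0 = rng.normal(100.0, 10.0, n)          # initial capacity
    rate = rng.uniform(0.2, 0.6, n)          # capacity loss per year
    Rt = R0 - rate * t_years                 # degraded capacity at year t
    S = rng.lognormal(mean=np.log(50.0), sigma=0.3, size=n)  # demand
    return float(np.mean(Rt <= S))

for t in (0, 25, 50, 75):
    print(t, failure_probability(t))
```

The failure probability rises monotonically with time as the degraded capacity distribution drifts toward the demand distribution; a full assessment would replace these stand-ins with inspection-informed deterioration models and hazard curves for earthquake and scour.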
The thesis is structured in five chapters: (1) Introduction, covering the motivation and objectives of the study; (2) Literature review, addressing the background of natural hazard modelling and time-dependent reliability assessment; (3) Methods for probabilistically modelling natural hazards and the structural deterioration of bridges; (4) Performance assessment of deteriorating bridges under competing hazards, providing numerical measures of structural reliability for a three-span reinforced concrete bridge based on a finite element model; and (5) Conclusions and recommendations, summarizing the main research findings and discussing possible directions for further study.

Item Open Access Surrogate modeling for efficient analysis and design of engineering systems(Colorado State University. Libraries, 2021) Li, Min, author; Jia, Gaofeng, advisor; Ellingwood, Bruce, committee member; van de Lindt, John, committee member; Gao, Xinfeng, committee member
Surrogate models, trained using a data-driven approach, have been extensively used to approximate the input/output relationships of expensive high-fidelity models (e.g., large-scale physical experiments and high-resolution, computationally expensive numerical simulations). Surrogate models are far more computationally efficient than the high-fidelity models they approximate; once trained, they can replace the original high-fidelity models to enable efficient subsequent analysis and design of engineering systems. The quality of surrogate-based analysis and design relies largely on the prediction accuracy of the constructed surrogate model. To ensure prediction accuracy, the training data should be adequate in both size and sampling. Unfortunately, with limited computational budgets, it is typically challenging to obtain large amounts of training data by running high-fidelity models.
Furthermore, a significant challenge arises in obtaining sufficient training data for problems with high-dimensional model inputs due to the well-known curse of dimensionality. To build surrogate models with high prediction accuracy and generalization performance while using as few computational resources as possible, this dissertation proposes several advanced strategies and examines their performance in several practical engineering applications. The fundamental idea of the proposed strategies is to embed extra knowledge about the high-fidelity models in the surrogate model by enriching the training data (e.g., leveraging additional low-fidelity data or censored/bounded data) and by strengthening the model assumptions (e.g., explicitly incorporating prior knowledge about the physics of the problem, or exploiting low-dimensional latent structures/features). This reduces the required amount of high-fidelity training data while effectively boosting the prediction accuracy of the resulting surrogate model. Among surrogate models, Gaussian process models have been gaining popularity due to their flexibility in modeling complex functions and their ability to provide closed-form predictive distributions. The strategies are therefore developed in the context of Gaussian process models, but the ideas are expected to be applicable to other types of surrogate models.
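The closed-form predictive distribution that makes Gaussian process models attractive as surrogates can be sketched in a few lines. Below is a minimal noise-free GP regression on a toy one-dimensional "expensive" model; the kernel choice and hyperparameter values are illustrative assumptions, not those of the dissertation:

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sf=1.0):
    """Squared-exponential kernel; prior knowledge (smoothness, length
    scales, physical constraints) enters the surrogate through this choice."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-8):
    """Closed-form GP posterior mean and variance at test points Xte,
    given expensive high-fidelity samples (Xtr, ytr)."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    Kss = rbf(Xte, Xte)
    alpha = np.linalg.solve(K, ytr)
    mu = Ks.T @ alpha                        # posterior mean
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)            # posterior variance
    return mu, var

# Ten "high-fidelity" evaluations of a toy expensive model y = sin(3x)
Xtr = np.linspace(0.0, 2.0, 10)
ytr = np.sin(3.0 * Xtr)
mu, var = gp_predict(Xtr, ytr, np.array([0.95]))
print(mu[0], np.sin(3.0 * 0.95))  # surrogate prediction vs. true model
```

In the dissertation's setting, the training pairs would come from high-fidelity simulations or experiments, and the kernel is the natural place to embed physics constraints or multi-fidelity structure.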
In particular, this dissertation (i) develops a physics-constrained Gaussian process model that efficiently incorporates prior knowledge about physical constraints/characteristics of the input/output relationship by designing specific kernels, (ii) proposes a general multi-fidelity Gaussian process model capable of integrating training data with different levels of accuracy (i.e., both high-fidelity and low-fidelity data) and completeness (i.e., both accurate and censored data), and (iii) develops an efficient surrogate modeling approach for problems with high-dimensional binary model inputs by integrating a dimension reduction technique with the Gaussian process model, and investigates its application to design optimization problems. The performance of the proposed strategies is then validated through the analysis and design of several engineering systems, including (i) calculating the hydrodynamic characteristics of wave energy converters (WECs) in an array, (ii) predicting the deformation capacity of reinforced concrete columns under cyclic loading, and (iii) optimizing the topology of periodic structures.

Item Open Access Systems engineering analysis and application to the Emergency Response System(Colorado State University. Libraries, 2021) Marzolf, Gregory S., author; Sega, Ronald, advisor; Bradley, Thomas, advisor; Simske, Steve, committee member; van de Lindt, John, committee member
This research applies systems engineering methods to build a more effective emergency response system (the Engineered Emergency Response System, or EERS) that minimizes the adverse impacts and consequences of incidents. Systems engineering processes were used to identify stakeholder needs and requirements, and systems engineering methodologies were then used to build the system. Emphasis was placed on building a more capable engineered system that could handle not only routine emergencies but also events of increased complexity, uncertainty, and severity.
The resulting EERS was built on suitability constraints including conformance to the National Response Framework, the National Incident Management System Framework, and the community fragility concept, as well as ease of transformation from the existing system. Empirical data from two complex events in Colorado's El Paso County, the Waldo Canyon Wildland Urban Interface fire in 2012 and the Black Forest Wildland Urban Interface fire in 2013, were used to inform the system's design and operation. These complex and dynamic events were deemed representative of other complex events based on existing publications and research. After the engineered system was built, it was validated by: 1) applying the Functional Dependency Network Analysis model to data obtained from the two fires; 2) evaluating best practices that were integrated into the EERS; 3) qualitatively assessing system suitability requirements; and 4) conducting a Delphi study to assess the value of applying systems engineering to this research area and the feasibility of implementing the EERS within existing systems. The validation provided evidence that the EERS is more effective than the existing system, while also showing that it is suitable and feasible. The Delphi study provided evidence that the subject matter experts deemed the systems engineering approach valuable. More research is needed to determine system needs and capabilities for specific communities in light of their unique organizations, cultures, environments, and associated hazards, and in the areas of command and control and communications.
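Functional Dependency Network Analysis, used in validation step 1, propagates degraded operability from feeder nodes to receiver nodes through strength-of-dependency and criticality-of-dependency parameters. The single-receiver sketch below uses one common two-parameter form as a stand-in; the exact formulation and calibrated parameters in the dissertation may differ:

```python
def receiver_operability(feeders, baseline=100.0):
    """Simplified FDNA-style propagation (a common two-parameter variant,
    treated here as an assumption). Each feeder is a tuple
    (P_i, alpha, beta): feeder operability on a 0-100 scale, dependency
    strength alpha in [0, 1], and criticality allowance beta in
    operability units. The receiver takes the most limiting value."""
    levels = [baseline]
    for P_i, alpha, beta in feeders:
        sod = alpha * P_i + baseline * (1.0 - alpha)  # strength of dependency
        cod = P_i + beta                              # criticality of dependency
        levels.append(min(sod, cod))
    return min(levels)

# Hypothetical example: a dispatch function fed by a communications node
# degraded to 40% operability and a power node at 90% operability.
print(receiver_operability([(40.0, 0.5, 30.0), (90.0, 0.3, 50.0)]))  # → 70.0
```

Running such propagation over the full dependency network for the Waldo Canyon and Black Forest fire data is what allowed the existing system and the EERS to be compared on a common operability measure.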