Theses and Dissertations
Browsing Theses and Dissertations by Title
Now showing 1 - 20 of 661
Item Open Access
A collaborative planning framework for integrated urban water management with an application in dual water supply: a case study in Fort Collins, Colorado (Colorado State University. Libraries, 2018)
Cole, Jeanne Reilly, author; Sharvelle, Sybil, advisor; Grigg, Neil, advisor; Arabi, Mazdak, committee member; Goemans, Chris, committee member

Urban water management is essential to our quality of life. As much of our urban water supply infrastructure reaches the end of its useful life, water managers are using the opportunity to explore alternative strategies that may enable them to better meet modern urban water challenges. Water managers must navigate the labyrinth of balancing stakeholder needs, considering all costs and benefits, reducing decision risk, and, most importantly, ensuring public health and protecting the environment. Innovative water managers need guidance and tools to help manage this complex decision space. This dissertation proposes a collaborative, risk-informed, triple bottom line, multi-criteria decision analysis (CRTM) planning framework for integrated urban water management decisions. The CRTM framework emerged from the obstacles and stakeholder needs encountered during a study evaluating alternative dual water supply strategies in Fort Collins, Colorado. The study evaluated four strategies for the dual supply of raw and treated water, including centralized and decentralized water treatment, varying distribution system scales, and integration of existing irrigation ditches with raw water landscape irrigation systems. The results suggest that while the alternative dual water supply strategies offer many social and environmental benefits, the optimal strategies depend on local conditions and stakeholder priorities. The sensitivity analysis revealed that the key parameters driving uncertainty in alternative performance were regulatory and political, reinforcing the importance of participation from a wide variety of stakeholders.
Evaluation of the decision process suggests the CRTM framework increased knowledge sharing between study participants. Stakeholder contributions enabled a comprehensive evaluation of the option space while examining the financial, social, and environmental benefits and trade-offs of the alternatives. Most importantly, evolving the framework successfully maintained stakeholder participation throughout the study.

Item Open Access
A combined field analysis and modeling approach for assessing the impact of groundwater pumping on streamflow (Colorado State University. Libraries, 2018)
Flores, Luke, author; Bailey, Ryan T., advisor; Gates, Timothy K., committee member; Sanford, William E., committee member

The magnitude of volumetric water exchange between streams and alluvial aquifers impacts contaminant transport rates, channel erosion and sedimentation, nutrient loading, and aquatic and riparian habitat. Quantifying the interactions between stream water and groundwater is also critically important in regions where surface water and tributary groundwater are jointly administered under a prior appropriation doctrine, such as in the western United States. Of particular concern is the effect of a nearby pumping well on streamflow. When the cone of influence of a pumping well reaches a nearby stream, the resulting hydraulic gradient can induce enhanced seepage of streamflow into the aquifer or decrease the rate of groundwater discharge to the stream. The change in these rates is often modeled using analytical or numerical solutions, or some combination of both. Analytical solutions, although simple to apply, can produce discrepancies between field data and model output due to assumptions regarding stream and aquifer geometry and homogeneity of hydraulic parameters. Furthermore, the accuracy of such models has not been investigated in detail due to the difficulty of measuring streamflow loss in the field.
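The analytical solutions in question are typified by the classical Glover and Balmer (1954) depletion-fraction formula for a fully penetrating stream in a homogeneous aquifer. A minimal sketch, with illustrative parameter values that are not taken from the thesis:

```python
from math import erfc, sqrt

def depletion_fraction(d, S, T, t):
    """Fraction of the pumping rate supplied by stream depletion after
    time t (Glover and Balmer, 1954): q/Q = erfc(sqrt(d**2 * S / (4*T*t))).
    d: well-to-stream distance [m]; S: storativity [-];
    T: transmissivity [m^2/day]; t: time since pumping began [days]."""
    return erfc(sqrt(d * d * S / (4.0 * T * t)))

# Illustrative values: a well 100 m from the stream, S = 0.2, T = 500 m^2/day.
for days in (1.0, 30.0, 365.0):
    print(days, round(depletion_fraction(100.0, 0.2, 500.0, days), 3))
```

The depletion fraction grows toward 1 as pumping continues, which is why the timing of pumping matters for administration under prior appropriation.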
In the first part of this thesis, a field experiment was conducted along a reach of the South Platte River in Denver, Colorado to estimate pumping-induced streamflow loss and groundwater head drawdown, and to compare the data against analytical modeling results. The analytical solutions proved accurate when streamflow was low and constant, but performed poorly when streamflow was high and variable. In particular, the models are not capable of accurately simulating the effects of increasing stream width and bank storage due to rapid increases in streamflow. To better account for these effects, a new analytical modeling framework is introduced that accounts for all major factors contributing to streamflow loss at a given site, both during pumping and between pumping periods. For the reach analyzed herein, the method illustrates that pumping wells often caused only half of the streamflow loss occurring along the reach. This method can be used in other stream-aquifer systems impacted by nearby pumping. The U.S. Geological Survey's three-dimensional finite-difference groundwater flow model, MODFLOW, was also used to assess the impacts of pumping on streamflow. While MODFLOW removes many of the restrictive assumptions that define analytical solutions, certain limitations persist when the program is applied on local, fine scales with dynamic interactions between a stream and alluvium. In particular, when the average stream width is greater than the computational grid cell size, the model will return systematically biased, grid-dependent results. Moreover, simulated streamflow loss will be limited in the range of values that can be modeled. To address these limitations, a new stream module is presented which (1) allows streams to dynamically span multiple computational grid cells over a cross section to allow for a finer mesh; (2) computes streamflow and backwater stage along a stream reach using the quasi-steady dynamic wave approximation to the St. Venant equations, which allows for more accurate stream stages when normal flow cannot be assumed or a rating curve is not available; and (3) incorporates a process for computing streamflow loss when an unsaturated zone develops under the streambed. Streamflow loss is not assumed constant along a cross section. It is shown that most streamflow loss occurs along stream banks and over newly inundated areas after increases in upstream streamflow. The new module is tested against streamflow and groundwater data collected in a stream-aquifer system along the South Platte River in Denver, Colorado, and is used to estimate the impact of nearby pumping wells on streamflow. When compared with existing stream modules, the new module produces more accurate results. The new module can be applied to other small-scale stream-aquifer systems.

Item Open Access
A comparison of electrocoagulation and chemical coagulation treatment effectiveness on frac flowback and produced water (Colorado State University. Libraries, 2015)
Hutcherson, John Ryan, author; Carlson, Ken, advisor; Omur-Ozbek, Pinar, committee member; Stednick, John, committee member

Development and production of tight shale for crude oil and natural gas is increasing rapidly throughout the United States, and especially in the Wattenberg field of Northern Colorado. Hydraulic fracturing is used to stimulate the shale formation, which allows previously trapped oil and gas to flow to the surface. According to Goodwin (2013), approximately 2.8 million gallons of water are required to hydraulically fracture a horizontal well. Freshwater makes up the vast majority of water used to create these fracturing fluids, with a small portion coming from recycling of previously used fracturing fluid. In a semi-arid climate such as Northern Colorado, there are multiple demands for freshwater, often exceeding the supply. Once a well is fractured, water flows back to the surface along with the targeted oil and gas.
This fluid is typically referred to as flowback or produced water. In some areas around the United States, as much as 10 barrels of water flow to the surface for every barrel of oil recovered. For the purposes of this research, flowback is defined as water that flows to the surface within the first 30 days after fracturing. After fracturing, up to 71% of the water (produced water) used to fracture the well flows back to the surface along with oil and gas, with approximately 27% flowing back in the first 30 days (Bai et al., 2013). The flowback and produced water is currently disposed of either by deep underground injection or in evaporation ponds. There has been very little effort to capture, recycle, and reuse this flowback or produced water, as it has traditionally been considered a waste product. Due to the limited freshwater supply in Colorado, recycling and reuse should be explored in greater detail and with a sense of urgency. The ultimate goal for the oil and gas industry should be to recycle and reuse 100% of flowback and produced water in the creation of hydraulic fracturing fluid for other production wells, creating a closed-loop system. Before flowback and produced water can be reused, treatment of the water is required. Treatment for reuse typically consists of removal of solids, organic compounds, and some inorganic ions. Historically, chemicals have been the dominant method used for coagulation to remove solids, as they are readily available and in many cases can be cheaper than other methods. Electrocoagulation (EC) is now also being considered as a produced water treatment method. EC involves running electric current across metal plates (sacrificial anodes) in a solution, which creates an in situ coagulant dose (Emamjomeh and Sivakumar, 2008). There is a time component to water quality changes over the life of a well.
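For context on the in situ dosing just described: the theoretical metal released by a sacrificial anode follows Faraday's law, m = I·t·M/(z·F). A minimal sketch assuming 100% current efficiency, with illustrative values that are not from this study:

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def ec_coagulant_dose(current_a, time_s, molar_mass, z, volume_l):
    """Theoretical metal dose (mg/L) released by a sacrificial anode,
    from Faraday's law: m = I*t*M/(z*F). Assumes 100% current efficiency."""
    mass_g = current_a * time_s * molar_mass / (z * FARADAY)
    return mass_g * 1000.0 / volume_l

# Illustrative: 2 A applied for 10 min with aluminum electrodes
# (M = 26.98 g/mol, z = 3) in a 5 L batch reactor.
print(round(ec_coagulant_dose(2.0, 600.0, 26.98, 3, 5.0), 1))  # -> 22.4 mg/L
```

Actual doses are lower or higher depending on current efficiency and side reactions, which is part of what treatability studies such as this one measure.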
Early flowback typically has higher concentrations of aluminum, solids, and total organic carbon (TOC), as it is influenced mostly by the makeup of the fracturing fluid. At some point around the 30-day mark, a transition in water quality begins. The formation or connate water appears to have a greater influence on water quality than does the fracturing fluid. Treatment effectiveness appears to track this changing water quality, as treatment is less effective on early flowback than on produced water. TOC and low ionic strength may be the reason early flowback is more difficult to treat. Chemical coagulation (CC) is more effective than EC at removing TOC and aluminum in early flowback water, while EC is more effective at removing iron. However, both treatments are effective after day 27.

Item Open Access
A cross sector evaluation comparing nutrient removal strategies in urban water systems (Colorado State University. Libraries, 2019)
Hodgson, Brock, author; Sharvelle, Sybil, advisor; Arabi, Mazdak, committee member; Carlson, Ken, committee member; Hoag, Dana, committee member

Water supply management and reduction of nutrient pollution from urban water systems are two of the most important issues facing utility managers today. To better protect water supplies, many states have established or are establishing total nitrogen (TN) and/or total phosphorus (TP) loading restrictions for urban water systems. Traditionally, these targets are met by wastewater treatment facility (WWTF) improvements, but stringent regulations can make this challenging and costly. As regulations increase, it may be necessary or more cost effective to consider additional options for nutrient removal from urban water systems, including water management practices or stormwater control measures (SCMs).
There is a wide range of treatment approaches that can be considered at a WWTF for improving nutrient removal, but evaluating these scenarios can be challenging and is traditionally accomplished via mechanistic models specific to individual WWTFs, requiring process expertise and a rigorous sampling and analysis program. Water management practices are traditionally considered for water supply improvement; however, there is little research characterizing their impact on water quality. There is a need for additional research and tools that facilitate estimating the effectiveness of various nutrient removal technologies and that consider cross sector strategies and tradeoffs between adoption of practices. To understand the impacts of water management practices, the impact of indoor conservation, source separation, and graywater and effluent reuse on WWTF influent, effluent, and downstream water quality was characterized, identifying which practices can potentially help meet nutrient reduction targets. For WWTF technologies, previously calibrated and validated mechanistic models were used to develop a simplified empirical model to more easily estimate and compare the effectiveness of various WWTF technologies as a function of influent wastewater quality. The findings from the water management practice evaluation and the WWTF treatment comparison provided the framework for an urban water systems evaluation that combined the developed empirical models with the benefits of stormwater control measures (SCMs), characterized via the Simple Method, to evaluate a multitude of strategies for meeting nutrient removal targets in the urban water system. Lastly, this research considered the impacts on biosolids management with the increase of liquid stream removal at the WWTF.
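The Simple Method referenced here (Schueler, 1987) estimates annual stormwater pollutant loads from rainfall, catchment imperviousness, and a pollutant event mean concentration. A minimal sketch; the concentration and area values are illustrative, not from this research:

```python
def simple_method_load(rainfall_in, impervious_frac, conc_mg_l, area_acres):
    """Annual pollutant load in lb/yr via the Simple Method (Schueler, 1987):
    L = 0.226 * R * Rv * C * A, where the runoff coefficient
    Rv = 0.05 + 0.9 * (impervious fraction)."""
    rv = 0.05 + 0.9 * impervious_frac
    return 0.226 * rainfall_in * rv * conc_mg_l * area_acres

# Illustrative: 15 in/yr rainfall, 50% impervious catchment, a total
# nitrogen concentration of 2.0 mg/L, and a 100-acre drainage area.
print(round(simple_method_load(15.0, 0.5, 2.0, 100.0), 1))  # -> 339.0 lb/yr
```

An SCM's benefit is then commonly expressed as a percent reduction applied to this load, which is how such estimates slot into a cross sector comparison.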
The research identified source separation and effluent reuse as frequent parts of effective nutrient removal strategies and of an optimal nutrient removal strategy, and even as necessary under stringent nutrient requirements. In terms of wastewater treatment, adopting more advanced treatment processes is most beneficial in carbon-limited WWTFs and negligible when there is adequate carbon for biological nitrogen and phosphorus removal. This includes sophisticated processes like nitrite shunt and 5-Stage Bardenpho and sidestream processes like struvite precipitation and ammonia stripping. While improvements to WWTFs are likely with the adoption of stringent nutrient regulations, a multi-objective optimization identified water management practices and SCMs as part of all non-dominated nutrient removal strategies. As nutrient requirements become more stringent, the process options for WWTFs are limited, and frequently a combination of water management practices and SCMs is necessary. This was demonstrated via a systems analysis of cost-effective nutrient removal solutions in urban water systems that can be easily applied to other urban systems because of the empirical models developed with this research. These tools are necessary to help utility managers identify optimal nutrient removal strategies. As utilities invest in improvements to WWTF operations, there may also be notable impacts on biosolids management, primarily in terms of phosphorus, which may limit land application rates, resulting in additional cost or disposal of biosolids that have historically been beneficially used in agriculture. These impacts must also be considered by utility managers when selecting optimal nutrient removal strategies for urban water systems.

Item Embargo
A data-driven characterization of municipal water uses in the contiguous United States of America (Colorado State University. Libraries, 2024)
Chinnasamy, Cibi Vishnu, author; Arabi, Mazdak, advisor; Sharvelle, Sybil, committee member; Warziniack, Travis, committee member; Goemans, Christopher, committee member

Municipal water systems in the United States (U.S.) are facing increasing challenges from changing urban population dynamics and socio-economic conditions, as well as from the impacts of weather extremes on water availability and quality. These challenges pose a serious risk to municipal water providers by hindering their ability to continue providing safe drinking water to residents while also securing adequate supply for economic growth. A data-driven approach was developed in this study to characterize the trends, patterns, and urban scaling relationships in municipal water consumption across the contiguous United States. Then, using robust statistical methods, water consumption patterns are modeled, identifying key climatic, socio-economic, and regional factors. The first chapter of this data-driven study examined municipal water uses of 126 cities and towns across the U.S. from 2005 to 2017, analyzing the temporal trends and spatial patterns in water consumption and identifying the influencing factors. Water usage in gallons per person per day, the ratio of commercial, industrial, and institutional (CII) to Residential water use, and percent outdoor water consumption were statistically calculated using aggregated monthly and annual water use data. The end goal was to statistically relate the variations in the CII to Residential water use ratio across the municipalities to their local climatic, socio-economic, and regional factors. The results indicate an overall decreasing trend in municipal water use of 2.6 gallons per person annually, with greater reductions achieved in the residential sector. Both Residential and CII water use exhibit significant seasonality over an average year. Large cities, particularly in the southern and western parts of the U.S.
with arid climates, had the highest demand for water but also showed the largest annual reductions in their per capita water consumption. This study also revealed that outdoor water use varied significantly, from 3 to 64 percent of Total water consumption across the U.S., and was highest in smaller cities in the western and arid regions. Factors such as April precipitation, annual vapor pressure deficit, number of employees in the manufacturing sector, total percentage of houses built before 1950, and total percentage of single-family houses explain much of the variation in the CII to Residential water use ratio across the CONUS. The second chapter leverages high-resolution, smart-metered water use data from over 900 single-family households in Arizona for the 2021 water year. This part of the study characterizes the determinants or drivers of water consumption patterns, specifically in single-family households, and presents a framework of statistical methods for analyzing smart-metered water consumption data in future research. A novel approach was developed to characterize household appliance efficiency levels using clustering techniques on 5-second interval data. Integrating water consumption data with detailed spatial information on household and building characteristics, along with local climatic factors, yielded a robust mixed-effects model that captured the variations in household water use with high accuracy at a monthly time step. Local air temperature, household occupancy level, presence of a swimming pool, the year the household was built, and the efficiency of indoor appliances and irrigation systems were shown to be the key factors influencing variations in household water use. The third and fourth chapters of this study reanalyzed the water consumption data of those 126 municipalities.
The third chapter focused on estimating the state of water consumption efficiencies, or economies of scale, in municipal water systems using an econometrics framework called urban scaling theory. A parsimonious mixed-effects model that combined the effects of socio-economic, built environment, and regional factors, such as climate zones and water use type, was developed to model annual water uses. The results confirm efficiencies in water systems as cities grow and become denser, with the CII water use category showing the highest efficiency gains, followed by the Residential and Total water use categories. A key finding is the estimation of unique variations in water use efficiency patterns across the U.S. These variations are influenced by factors such as population, housing characteristics, the combined effects of climate type and geographical location, and the type of water use category (Residential or CII) that dominates in each city. The fourth and final chapter synthesizes the lessons learned about the drivers of municipal water uses and explores the development of a model for predicting monthly water consumption patterns using machine learning algorithms. These algorithms demonstrated improved capabilities in predicting Total monthly water use more accurately than the previous modeling efforts, while also controlling for factors with multi-collinearity. Climatic variables (like precipitation and vapor pressure deficit), socio-economic and built environment variables (such as income level and housing characteristics), and regional factors (including climate type and water use type dominance in a city) were confirmed by the machine learning algorithms to strongly influence municipal water consumption patterns. Overall, this study showcases the power of data-driven approaches to effectively understand the nuances in municipal water uses.
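Urban scaling analyses of this kind fit a power law W = c·N^β between city population N and total use W, with β < 1 indicating economies of scale. A minimal sketch on synthetic data; the coefficient and exponent are illustrative assumptions, not estimates from this study:

```python
import numpy as np

def scaling_exponent(population, water_use):
    """Estimate beta in W = c * N**beta by ordinary least squares in
    log-log space; beta < 1 indicates economies of scale."""
    beta, _log_c = np.polyfit(np.log(population), np.log(water_use), 1)
    return beta

# Synthetic cities generated from an assumed sublinear law (beta = 0.9),
# so the log-log fit should recover the exponent.
pop = np.array([1e4, 5e4, 1e5, 5e5, 1e6, 5e6])
use = 120.0 * pop ** 0.9
print(round(scaling_exponent(pop, use), 3))  # -> 0.9
```

In practice the regression carries city-level random effects and covariates, as the abstract describes; this sketch shows only the core scaling relationship.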
Integration of the lessons learned and the statistical frameworks used in this study can empower water utilities and city planners to manage municipal water demands with greater resiliency and efficiency.

Item Open Access
A facilitated process and online toolset to analyze complex systems and coordinate active watershed development and transformation (Colorado State University. Libraries, 2014)
Herzog, Margaret T., author; Labadie, John W., advisor; Grigg, Neil S., advisor; Sharvelle, Sybil, committee member; Lacy, Michael G., committee member; Clayshulte, Russell N., committee member

Integrated Water Resources Management (IWRM) coordinates public, private, and nonprofit sectors in strategic resource development, while emphasizing holistic environmental protection. Without more integrated efforts, adverse human effects on water, other natural resources, and ecosystem services may worsen and cause more unintended cross-scale effects. Meanwhile, fragmented jurisdictional controls and competing demands continue to create new obstacles to shared solutions. Lack of coordination may accentuate the negative impacts of extreme events, over-extraction, and other, often unrecognized, threats to social-ecological systems integrity. To contend with these challenges, a research-based, facilitated process was used to design an online toolset to analyze complex systems more holistically, while exploring more ways to coordinate joint efforts. Although the focus of the research was the watershed scale, other scales of social-ecological problems may be amenable to this approach. The process builds on an adaptive co-management (ACM) framework. ACM promotes systems-wide, incremental improvements through cooperative action and reflection about complex issues affecting social-ecological systems at nested and overlapping scales.
The resulting ACM Decision Support System (DSS) process may help reduce fragmentation in both habitat and social structure by recognizing and encouraging complex systems reintegration and reorganization to improve outcomes. The ACM DSS process incorporates resilience practice techniques to anticipate risks by monitoring drivers and thresholds and to build coordinated coping strategies. The Bear Creek Watershed Association (BCWA) served as a case study in nutrient management, focused on understanding and mitigating the complex causes of cultural eutrophication in Bear Creek Reservoir, a flood control reservoir to which the entire watershed drains. The watershed lies in the Upper South Platte River Basin, the eastern mountain headwaters of metropolitan Denver, Colorado, in the United States. To initiate Phase I of the ACM DSS process, qualitative data on issues, options, social ties, and current practices were triangulated through organizational interviews, document review, a systems design group, and ongoing BCWA, community, river basin, and state-level participation. The mixed methods approach employed geographic information systems (GIS) for spatial analysis, along with statistical analysis and modeling techniques, to assess reported issues and potential options quantitatively. Social network analysis (SNA) was used systematically to evaluate organizational relationships and transactions, and to direct network expansion toward a more robust core-periphery network structure. Technical and local knowledge developed through these methods were complemented by ongoing academic literature review and analysis of related watershed efforts near and far. Concurrently, BCWA member organizations helped to incrementally design and test an online toolset for greater emphasis on ACM principles in watershed program management.
To date, online components of the ACM DSS include issues reporting, interactive maps, monitoring data access, group search, a topical knowledge base, projects and options tracking, and watershed and lake management plan input. Online toolset development complemented assessment by formalizing what was learned together throughout the ACM DSS process so that subsequent actions align with this approach. Since the online system was designed using open source software and a flexible content management system, the results can be readily adapted to serve a wider variety of purposes by adjusting the underlying datasets. The research produced several potentially useful results. A post-project survey averaged 9.3 on a 10-point satisfaction scale. The BCWA board adopted the resulting ACM DSS process as a permanent best management practice, funding a facilitator to continue its expansion. A network weaver to continually foster cooperation, a knowledge curator to expand shared knowledge resources, and a systems engineer to reduce uncertainty and ambiguity and dissect complexity were all found to be critical new roles for successful ACM implementation. Watershed program comparisons also revealed ten qualities that may promote ACM. The technical analysis of nutrient issues revealed that phosphorus desorption from fine sediments contributed to cultural eutrophication through several distinct mechanisms, which may be addressed through a wider range of non-point source controls and in-lake management options. Potential effects of floods, wildfires, and droughts were assessed, which has resulted in more coordinated, proactive plans and studies. Next steps include formulating multi-institutional, multi-level academic studies in the watershed, expanding community engagement efforts, and establishing innovation clusters.
Multi-disciplinary research needs include studying nutrient exchange processes, piloting decentralized wastewater treatment systems, optimizing phosphorus removal processes, chemically blueprinting nutrient source streams, and developing an integrated modeling framework. At least four additional stages of development are planned to refine and mature the ACM DSS process over time. The ACM DSS process is also being considered for other places and IWRM problem sets.

Item Open Access
A finite element analysis of flexible debris-flow barriers (Colorado State University. Libraries, 2018)
Debelak, Aliena Marie, author; Bareither, Christopher A., advisor; Mahmoud, Hussam N., committee member; Stright, Lisa, committee member

The objective of this study was to simulate the stress-displacement behavior of a flexible debris-flow mitigation structure with a three-dimensional finite element model (FEM). Flexible, steel ring-net structures are becoming state of practice for debris-flow mitigation in mountainous terrain. These structures have been shown to be effective in geohazard mitigation; however, their design commonly does not incorporate the coupled interactions between debris-flow mechanics and the stress-strain response of the steel structure. Thus, this study focused on assessing the effectiveness of using an FEM in ABAQUS to simulate the coupled behavior encountered in a flexible debris-flow mitigation structure. The debris flow was modeled as a series of rectangular solid blocks, and the flexible debris-flow barrier was modeled as a series of three individual parts: braking elements, cables, and rings. The primary model outputs evaluated were the temporal and spatial relationships of forces within the structure and the final barrier deformation. A full-scale field experiment from the literature was used as a benchmark test to validate FEM simulations, and subsequently the FEM was used to assess barrier sensitivity via a parametric study.
Parameters were chosen to represent common geotechnical variables of the debris flow and structural variables of the steel ring-net structure.

Item Open Access
A flood frequency derivation technique based on kinematic wave approach (Colorado State University. Libraries, 1987)
Cadavid, Luis Guillermo, author; Obeysekera, J. T. B., advisor; Salas, Jose D., committee member; Schumm, Stanley Alfred, 1927-, committee member

The present study deals with the derivation of a methodology to obtain a flood frequency distribution for small ungaged watersheds where the overland flow phase is considered an important timing component. In the hydrological literature, this technique comprises three components: a rainfall infiltration model, an effective rainfall-runoff model, and a probabilistic component. The study begins with a review of the Geomorphological Instantaneous Unit Hydrograph (GIUH), in order to establish its applicability to the aforementioned type of watersheds. Some effective rainfall-runoff models currently used in hydrologic practice, like the GIUH and models based on the kinematic wave approach, lack the required features or do not consider all possible responses within the watershed. Therefore, a new model is developed and calibrated, based on the kinematic wave approach, for a first-order stream with two symmetrical lateral planes. The model consists of analytical and approximate solutions, the latter improved via regression analysis. The formulated model is used along with a statistical distribution for the effective rainfall intensity and effective duration to derive the flood frequency distribution through the probabilistic component. The structure of the equations in the different components requires a numerical algorithm to compute the flood frequency distribution curve for a given watershed.
The derived technique is tested on hypothetical and real watershed configurations, showing its capability to forecast flood frequency curves for ungaged watersheds and to account for the influence of parameters on the physics of flood formation. Actual watersheds are conceived as first-order streams with two symmetrical planes.

Item Open Access
A floor slab damper and isolation hybrid system optimized for seismic vibration control (Colorado State University. Libraries, 2014)
Engle, Travis J., author; Mahmoud, Hussam N., advisor; Bienkiewicz, Bogusz J., committee member; Clevenger, Caroline M., committee member

Damage and fatigue to structures due to earthquake loading have cost millions of dollars in repair and reconstruction over the last century. Limited reduction in seismic excitation has been achieved through base isolation and tuned mass damping theories. Both theories have limitations that reduce the effectiveness of the system, and overcoming these limitations is necessary to accomplish the goals of this study. An innovative design utilizing aspects of both isolation and tuned mass damping is developed by allowing the floor slabs of the structure to displace relative to the frame of the structure. Equations of motion are developed to model this unique system. The system is then optimized and the efficiency of the design is assessed. The reduction of the structural response over a range of frequencies is the goal of this optimization and thesis. Vibration control is achieved in this system by attempting to remove the mass of the floor slabs from the inertia of the system. When excited, the structure moves while the slabs remain stationary, greatly reducing the stress on the frame. In this way, the design is a friction isolation and damping hybrid system. The relative motion between the frame and the slab has to be controlled. To control its displacement, the slab is supported by a curved support and bumpers are added.
These additions utilize aspects of translational and pendulum tuned mass damper systems and force the slab back to its original location after excitation. The system also imitates multi-tuned mass damper systems by utilizing multiple floor slabs on multiple stories. Because of the large mass of the floor slabs, the system is more effective than any of the standard tuned mass damper systems. The system is optimized for its total response over a range of frequencies, compared to a standard composite structure over those same frequencies, by adjusting the combination of stories that are activated, the radius of curvature of the slab support, the stiffness of the bumpers, and the coefficient of friction of the contact surface between the support and slab. The response is a normalized multi-objective optimization of the acceleration, global drift, interstory drift, and relative slab drift. The optimized structures can be tested against real seismic records to demonstrate their effectiveness.Item Open Access A framework for life-cycle cost optimization of buildings under seismic and wind hazards(Colorado State University. Libraries, 2014) Cheng, Guo, author; Mahmoud, Hussam, advisor; Atadero, Rebecca, committee member; Strong, Kelly, committee memberThe consequential life and economic impacts resulting from the exposure of building structures to single hazards have been well quantified for seismic and wind loading. While it has been recognized that structures are likely to be subjected to multiple hazards during their service life, designing for such scenarios has typically been achieved by considering only the predominant hazard. Although this might be a reasonable approach from a structural reliability perspective, it does not necessarily result in the most optimal life-cycle cost for the designed structure. Although this observation has been highlighted in recent studies, research is still needed to develop an approach for multi-hazard life-cycle optimization of structures.
This study presents a framework, utilizing structural reliability, for cost optimization of structures under wind and seismic hazards. Two example structures, on which the framework is applied, are investigated and their life-cycle costs analyzed. The structures represent typical medium- and high-rise residential buildings located in the downtown San Francisco area. The framework uses the first-order reliability method (FORM), programmed in MATLAB and interfaced with the ABAQUS finite element software, to obtain the corresponding reliability factors for the buildings under various loading intensities characterized by the probability of exceedance. The finite element analyses are carried out based on real seismic and wind pressure records using nonlinear finite element time-history dynamic analysis. The random variables selected include hazard intensity (wind load and seismic intensity) and the elastic modulus of steel. Once the failure probabilities are determined for the given limit state functions, the expected failure cost over the building service duration, considering earthquake or wind hazard or both, is calculated with a discount rate applied. The expected life-cycle cost is evaluated using a life-cycle cost function, which includes the initial construction cost and the expected failure cost. The results show that the optimal building design differs depending on whether the wind hazard alone, the seismic hazard alone, or a combination of both is considered. The framework can be utilized for optimal design under both wind and seismic loads for a given level of hazard intensity.Item Open Access A framework for the analysis of coastal infrastructure vulnerability under global sea level rise(Colorado State University. Libraries, 2017) O'Brien, Patrick S., author; Julien, Pierre Y., advisor; Watson, Chester C., committee member; Ettema, Robert, committee member; Rathburn, Sara L., committee memberThe assumption of hydrologic stationarity has formed the basis of coastal design to date.
At the beginning of the 21st century, the impacts of climate variability and future climate change on coastal water levels have become apparent through long-term tide gauge records and anecdotal evidence of increased nuisance tidal flooding in coastal areas. Recorded impacts of global sea level rise on coastal water levels have been documented over the past 100 to 150 years, and future water levels will continue to change at increasing, unknown rates, resulting in the need to consider the impacts of these changes on past coastal design assumptions. New coastal infrastructure plans and designs should recognize the paradigm shift in assumptions from hydrologic stationarity to non-stationarity in coastal water levels. As we transition into the new paradigm, a significant knowledge gap remains in assessing the vulnerability of built coastal infrastructure, given the realization that the underlying design assumptions may be invalid. A framework for the evaluation of existing coastal infrastructure is proposed to effectively assess vulnerability. The framework, called the Climate Preparedness and Resilience Register (CPRR), provides the technical basis for assessing existing and future performance. The CPRR framework consists of four major elements: (1) datum adjustment, (2) coastal water levels, (3) scenario projections, and (4) performance thresholds. The CPRR framework defines methodologies which: (1) adjust for non-stationarity in coastal water levels and correctly make projections under multiple scenarios; (2) account for past and future tidal-to-geodetic datum adjustments; and (3) evaluate past and future design performance by applying performance models to determine the performance thresholds. The framework results are reproducible and applicable to a wide range of coastal infrastructure types in diverse geographic areas. The framework was applied in two case studies of coastal infrastructure on the east and west coasts of the United States.
The east coast case study, on the Stamford Hurricane Barrier (SHB) at Stamford, CT, investigated the navigation gate closures of the SHB project. The framework was successfully applied using two performance models, based on function and reliability, to determine the future time frame at which relative sea level rise (RSLR) would cause navigation gate closures to occur once per week on average, or 52 per year. The closure time analysis also showed the impact of closing the gate earlier to manage internal drainage to the harbor area behind the Stamford Hurricane Barrier. These analyses were made for three future sea level change (SLC) scenarios. The west coast case study evaluated four infrastructure elements at the San Francisco Waterfront: one building and three transportation elements. The CPRR framework applied two performance models, based on elevation and reliability, to assess the vulnerability to flooding under four SLC scenarios. An elevation-based performance model determined a time horizon for flood impacts for king tides and 10- and 100-year annual exceedance events. The reliability-based performance model refined the results obtained in the elevation-based model by incorporating uncertainty for the four infrastructure elements. The CPRR framework and associated methodologies were successfully applied to assess the vulnerability of two coastal infrastructure types and functions in geographically diverse areas on the east and west coasts of the United States.Item Open Access A K-ϵ turbulence model for predicting the three-dimensional velocity field and boundary shear in closed and open channels(Colorado State University.
Libraries, 1995) Hafez, Youssef Ismail, author; Gessler, Johannes, advisor; Thompson, Erik, advisor; Molinas, Albert, committee member; Bienkiewicz, Bogusz, committee member; Georg, Kurt, committee memberTo view the abstract, please see the full text of the document.Item Open Access A mass balance approach to resolving the stability of LNAPL bodies(Colorado State University. Libraries, 2010) Mahler, Nicholas T., author; Sale, Thomas C., advisor; Bau, Domenico A., committee member; McWhorter, David B., committee memberLight non-aqueous phase liquids (LNAPLs) are commonly present in soils and groundwater beneath petroleum facilities. When sufficient amounts of LNAPL have been released continuous bodies of LNAPL form. These bodies can have detrimental impacts to soil gas and groundwater. Furthermore, with time they can expand or translate laterally. Measurements of LNAPL flux within continuous bodies typically indicate that LNAPL is moving, albeit slowly. Commonly, these fluxes have been used to infer (by continuity) that the bodies as a whole are expanding and/or translating laterally. In conflict with this, dissolved plumes downgradient of LNAPL bodies are widely thought to be stable or shrinking due to natural attenuation. The hypothesis of this research is that natural losses of LNAPL in contiguous bodies can play an important role in limiting expansion and/or lateral translation of LNAPL bodies. Much like dissolved phase plumes, LNAPL bodies can be stable when internal fluxes are balanced by natural losses. As a first step, 50 measurements of LNAPL fluxes through wells from seven field sites are reviewed. All the values were acquired using tracer dilution techniques. The mean and median of the LNAPL flux measurements are 0.15 and 0.064 m/year, respectively. The measured LNAPL fluxes are three to five orders of magnitude less than typical groundwater fluxes. 
The primary significance of the small magnitude of the LNAPL fluxes relative to groundwater fluxes is that LNAPL discharge to the downgradient body could easily be equal to or less than the natural downgradient LNAPL losses that occur through dissolution into groundwater or evaporation into soil gas. In general, no clear correlations are seen between measured LNAPL fluxes and LNAPL thicknesses in wells, lengths to downgradient edges of LNAPL, or the specific gravities (density of LNAPL/density of water) of the LNAPL. Second, a proof-of-concept sand tank experiment is presented. The objective was to resolve whether natural LNAPL losses can limit expansion of an LNAPL body given a constant source. An open-top glass and stainless steel tank (1 m by 0.5 m by 0.025 m) was filled with uniform coarse sand and water. Water was pumped through the tank, producing a water seepage velocity of 0.25 m/day. Methyl tert-butyl ether (MTBE) was added to the tank at constant rates that were increased step-wise five times over a 120-day experiment. In all cases, the MTBE body initially expanded and subsequently stabilized at a finite length. The key observation was that steady LNAPL pool lengths were achieved with a constant inflow of LNAPL into the system. Lastly, analytical models are developed that describe the size of LNAPL bodies and spatial variations in LNAPL fluxes as functions of influent loading, rates of natural losses, and time. Three idealized geometries of LNAPL bodies are considered: one-dimensional, circular, and oblong. Results indicate that LNAPL fluxes decline progressing from the interior to the edges of an LNAPL body. Consistent with the laboratory studies, the solutions show that LNAPL bodies with a constant source reach finite dimensions at large times.
Building on this research, a pragmatic goal for management of contiguous LNAPL bodies is attaining a condition where the LNAPL bodies as a whole are stable or shrinking.Item Open Access A method for assessing impacts of parameter uncertainty in sediment transport modeling applications(Colorado State University. Libraries, 2009) Ruark, Morgan D., author; Niemann, Jeffrey D., advisor; Kampf, Stephanie, committee member; Griemann, Blair, committee memberNumerical sediment transport models are widely used to evaluate impacts of water management activities on endangered species, to identify appropriate strategies for dam removal, and for many other applications. The SRH-1D (Sedimentation and River Hydraulics - One Dimension) numerical model, formerly known as GSTARS, is used by the U.S. Bureau of Reclamation for many such evaluations. The predictions from models such as SRH-1D include uncertainty due to assumptions embedded in the model's mathematical structure, uncertainty in the values of parameters, and various other sources. In this paper, we aim to develop a method that quantifies the degree to which parameter values are constrained by calibration data and determines the impacts of the remaining parameter uncertainty on model forecasts. Ultimately, this method could be used to assess how well calibration exercises have constrained model behavior and to identify data collection strategies that improve parameter certainty. The method uses a new multi-objective version of Generalized Likelihood Uncertainty Estimation (GLUE). In this approach, the likelihoods of parameter values are assessed using a function that weights different output variables using their first-order global sensitivities, which are obtained from the Fourier Amplitude Sensitivity Test (FAST). The method is applied to SRH-1D models of two flume experiments: an erosional case described by Ashida and Michiue (1971) and a depositional case described by Seal et al. (1997).
Overall, the results suggest that the sensitivities of the model outputs to the parameters can be rather different for erosional and depositional cases and that the outputs in the depositional case can be sensitive to more parameters. The results also suggest that the form of the likelihood function can have a significant impact on the assessment of parameter uncertainty and its implications for the uncertainty of model forecasts.Item Open Access A method to downscale soil moisture to fine-resolutions using topographic, vegetation, and soil data(Colorado State University. Libraries, 2014) Ranney, Kayla J., author; Niemann, Jeffrey D., advisor; Green, Timothy R., committee member; Kampf, Stephanie K., committee memberVarious remote-sensing and ground-based sensor methods are available to estimate soil moisture over large regions with spatial resolutions greater than 500 m. However, applications such as water management and agricultural production require finer resolutions (10 to 100 m grid cells). To reach such resolutions, soil moisture must be downscaled using supplemental data. Several downscaling methods use only topographic data, but vegetation and soil characteristics also affect fine-scale soil moisture variations. In this thesis, a downscaling model that uses topographic, vegetation, and soil data is presented, which is called the Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model. The EMT+VS model assumes a steady-state water balance involving infiltration, deep drainage, lateral flow, and evapotranspiration. The magnitude of each process at each location is inferred from topographic, vegetation, and soil characteristics. To evaluate the model, it is applied to three catchments with extensive soil moisture and topographic data and compared to an Empirical Orthogonal Function (EOF) downscaling method. The primary test catchment is Cache la Poudre, which has variable vegetation cover.
Extensive vegetation and soil data were available for this catchment. Additional testing is performed using the Tarrawarra and Nerrigundah catchments where vegetation is relatively homogeneous and limited soil data are available for interpolation. For Cache la Poudre, the estimated soil moisture patterns improve substantially when the vegetation and soil data are used in addition to topographic data, and the performance is similar for the EMT+VS and EOF models. Adding spatially-interpolated soil data to the topographic data at Tarrawarra and Nerrigundah decreases model performance and results in worse performance than the EOF method, in which the soil data are not highly weighted. These results suggest that the soil data must have greater spatial detail to be useful to the EMT+VS model.Item Open Access A multi criteria decision support system for watershed management under uncertain conditions(Colorado State University. Libraries, 2012) Ahmadi, Mahdi, author; Arabi, Mazdak, advisor; Ascough, James C., II, committee member; Fontane, Darrell G., committee member; Hoag, Dana L., committee memberNonpoint source (NPS) pollution is the primary cause of impaired water bodies in the United States and around the world. Elevated nutrient, sediment, and pesticide loads to waterways may negatively impact human health and aquatic ecosystems, increasing costs of pollutant mitigation and water treatment. Control of nonpoint source pollution is achievable through implementation of conservation practices, also known as Best Management Practices (BMPs). Watershed-scale NPS pollution control plans aim at minimizing the potential for water pollution and environmental degradation at minimum cost. Simulation models of the environment play a central role in successful implementation of watershed management programs by providing the means to assess the relative contribution of different sources to the impairment and water quality impact of conservation practices. 
As significant shifts in climatic patterns become evident worldwide, many natural processes, including precipitation and temperature, are affected. With projected changes in climatic conditions, significant changes are also expected in the diffusive transport of nonpoint source pollutants, the assimilative capacity of water bodies, and the landscape positions of critical areas that should be targeted for implementation of conservation practices. The scale of investment in NPS pollution control programs makes it vital to ensure that the conservation benefits of practices will be sustained under shifting climatic paradigms and the challenges of plan adoption. Coupling watershed models with regional climate projections can potentially answer a variety of questions on the dynamic linkage between climate and the ecologic health of water resources. The overarching goal of this dissertation is to develop a new analysis framework for optimal NPS pollution control strategies at the regional scale under projected future climate conditions. The proposed frameworks were applied to the 24,800 ha Eagle Creek Watershed in central Indiana. First, a computational framework was developed to incorporate disparate information from observed hydrologic responses at multiple locations into the calibration of watershed models. This study highlighted the use of multiobjective approaches for proper calibration of watershed models used for pollutant source identification and watershed management. Second, an integrated simulation-optimization approach for targeted implementation of agricultural conservation practices was presented. A multiobjective genetic algorithm (NSGA-II) with mixed discrete-continuous decision variables was used to identify optimal types and locations of conservation practices for nutrient and pesticide control.
This study showed that the mixed discrete-continuous optimization method identifies better solutions than commonly used binary optimization methods. Third, conclusions from the NSGA-II optimization informed the development of a multi-criteria decision analysis framework to identify a near-optimal NPS pollution control plan using a priori knowledge about the system. The results suggested that the multi-criteria decision analysis framework can be an effective and efficient substitute for optimization frameworks. Fourth, hydrologic and water quality simulations driven by an extensive ensemble of climate projections were analyzed for their respective changes in basin-average temperature and precipitation. The results revealed that water yield and pollutant transport are likely to change substantially under different climatic paradigms. Finally, the impact of projected climate change on the performance of conservation practices and shifts in their optimal types and locations were analyzed. The results showed that the performance of NPS control plans under different climatic projections will vary substantially; however, the optimal types and locations of conservation practices remained relatively unchanged.Item Open Access A multi-objective community-level seismic retrofit optimization combining social vulnerability with an engineering framework for community resiliency(Colorado State University. Libraries, 2015) Jennings, Elaina N., author; van de Lindt, John W., advisor; Atadero, Rebecca, committee member; Mahmoud, Hussam, committee member; Peek, Lori, committee memberThis dissertation presents a multi-objective optimization framework for community resiliency by providing decision maker(s) at the local, state, or other government level(s) with an optimal seismic retrofit plan for their community's woodframe building stock. A genetic algorithm was selected to perform the optimization due to its robustness in multi-objective problem solving.
In the present framework, the algorithm provides a set of optimal community-level retrofit plans for the woodframe building inventory based on the socio-demographic characteristics of the focal community, Los Angeles, California. The woodframe building inventory was modeled using 37 archetypes designed according to several historical and state-of-the-art seismic design provisions and methodologies. The performance of the archetypes was quantified in an extensive numerical study using nonlinear time history analysis. Experimental testing was conducted at full scale on a three-story soft-story woodframe building. The experimental testing investigated the seismic performance of several retrofit strategies for use in the framework, and the results were used to develop a metric correlating inter-story drift limits with the damage states used in the framework. A performance-based retrofit design is presented in detail, and the experimental testing results of four retrofits are provided as well. The algorithm uses each archetype's seismic performance to identify the set of optimal community-level retrofit plans to enhance resiliency by minimizing four objectives: initial cost, economic loss, number of morbidities, and recovery time. In the model, initial cost sums the cost of each new retrofit; economic loss incorporates direct and indirect costs; the number of morbidities includes injuries, fatalities, and persons diagnosed with post-traumatic stress disorder (PTSD); and recovery time is estimated and may be used to represent the loss in quality of life for the affected population. The framework was calibrated to the estimated losses from the 1994 Northridge earthquake. An application of the framework is presented using Los Angeles County as the community. Two forecasted populations are also examined using census data for Daly City, California, and East Los Angeles to further exemplify the framework. Analyses were conducted at six seismic intensities.
In all illustrative examples, the total financial loss (i.e., initial cost + economic loss) was higher for the initial population (i.e., the un-retrofitted community). Combining this financial savings with the reduced number of morbidities makes it clear that the benefits of retrofitting the woodframe building stock greatly outweigh its higher initial cost and the risks and losses associated with not retrofitting. The results also demonstrated how retrofitting the existing woodframe building stock greatly reduces estimated losses, especially for very large earthquakes. The resulting losses were further investigated to demonstrate the important role that the mental health of the population plays in a community's economy and recovery following disastrous events such as earthquakes. Overall, the results clearly demonstrate the necessity of including social vulnerability when assessing or designing for community-level resiliency under seismic hazard.Item Open Access A new hurricane impact level ranking system using artificial neural networks(Colorado State University. Libraries, 2015) Pilkington, Stephanie F., author; Mahmoud, Hussam, advisor; van de Lindt, John, committee member; Schumacher, Russ, committee memberTropical cyclones are intense storm systems that form over warm water but have the potential to bring multiple related hazards ashore. While significant advancements have been made in forecasting such extreme weather, estimating the resulting damage and impact to society is significantly complex and requires substantial improvement. This is primarily due to the intricate interaction of multiple variables contributing to socio-economic damage on multiple scales. Consequently, communicating the risk, location vulnerability, and resulting impact of such an event is inherently difficult.
To date, the Saffir-Simpson Scale, based on wind speed, is the main ranking system used in the United States to describe an oncoming tropical cyclone event. There are models currently in use that predict loss using more parameters than just wind speed. However, they are not actively used as a means to concisely categorize these events, likely due to the scrutiny a model would face for possibly outputting an incorrect damage total. These models use parameters such as wind speed, wind-driven rain, and building stock to determine losses. The relationships between meteorological and locational parameters (population, infrastructure, and geography) are well recognized, which is why many models attempt to account for so many variables. With the help of machine learning, in the form of artificial neural networks, these intuitive connections could be recreated. Neural networks form patterns for nonlinear problems much as the human brain would, based on historical data. Using 66 historical hurricane events, this research will attempt to establish these connections through machine learning. To link these variables to a concise output, the proposed Impact Level Ranking System will be introduced. This categorization system will use levels, or thresholds, of economic damage to group historical events in order to provide a comparative level for a new tropical cyclone event within the United States.
Discussed herein are the effects of multiple parameters contributing to the impact of hurricane events, the use and application of artificial neural networks, the development of six possible neural network models for hurricane impact prediction, the importance of each parameter to the neural network process, the determination of the type of neural network problem, and finally the proposed Impact Level Ranking System Model and its potential applications.Item Open Access A nonlinear synthetic unit hydrograph method that accounts for channel network type(Colorado State University. Libraries, 2018) Czyzyk, Kelsey A., author; Niemann, Jeffrey D., advisor; Gironás, Jorge, committee member; Ronayne, Michael J., committee memberStormflow hydrographs are commonly estimated using synthetic unit hydrograph (UH) methods, particularly for ungauged basins. Current synthetic UHs either consider very limited aspects of basin geometry or require explicit representation of the basin flow paths. None explicitly considers the channel network type (i.e., dendritic, parallel, pinnate, rectangular, or trellis). The goal of this study is to develop and test a nonlinear synthetic UH that explicitly accounts for the network type. The synthetic UH is developed using kinematic wave travel time expressions for hillslope and channel points in the basin. The effects of the network structure are then isolated into two random variables whose distributions are estimated based on the network type. The proposed method is applied to ten basins from each classification and compared to other related methods. The results suggest that considering network type improves the estimated UHs, with the largest improvements seen for dendritic, parallel, and pinnate networks.Item Open Access A novel direct shear apparatus to evaluate internal shear strength of geosynthetic clay liners for mining applications(Colorado State University.
Libraries, 2016) Soleimanian, Mohammad R., author; Bareither, Christopher A., advisor; Shackelford, Charles D., committee member; Schaeffer, Steven L., committee memberThe use of geosynthetic clay liners (GCLs) in engineering practice has grown extensively over the past three decades due to the use of this material in containment applications such as non-hazardous solid waste, residential and commercial wastewater management, roadways, and other civil engineering construction projects. This growth has been supported by an enhanced understanding of the engineering properties of GCLs as well as their hydraulic and mechanical behavior in different applications. In particular, the internal shear strength of GCLs is an important design consideration since GCLs often are installed on sloped surfaces that induce internal shear and normal stresses. The objective of this study was to develop a direct shear testing apparatus to measure the internal shear strength of GCLs for use in mining applications. The direct shear apparatus was designed to support the following testing conditions for needle-punched reinforced GCLs: hydration and testing in non-standard solutions (e.g., pH ≤ 1 or pH ≥ 12); testing under high normal stresses (up to 2000 kPa); and testing at elevated temperatures (up to 80 °C). Ultra-high molecular weight polyethylene GCL shear boxes were developed to facilitate testing 300-mm-square and 150-mm-square specimens under displacement-controlled conditions. Experiments were conducted on 150-mm-square and 300-mm-square GCL specimens to (i) evaluate gripping surface effectiveness as a function of peel strength and normal stress, (ii) assess hydration procedures to adopt into a systematic shear-testing protocol, (iii) assess stress-displacement behavior for 150-mm and 300-mm GCL shear tests, and (iv) develop failure envelopes for peak (τp) and large-displacement (τld) shear strengths.
Shear behavior and peak and large-displacement shear strengths measured on both 150-mm and 300-mm square GCL specimens compared favorably to one another as well as to data from a previous study on a similar GCL. These comparisons validated the direct shear apparatus developed in this study and supported the use of small GCL test specimens to measure the internal shear behavior and shear strength of reinforced GCLs. Furthermore, the pyramid-tooth gripping plates developed to transfer shear stress from the interfaces between the geotextiles of the GCL and the shear platens to the internal region of the GCL were effective for a needle-punched GCL with a peel strength of 2170 N/m at normal stresses ≥ 100 kPa.
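The failure-envelope step described in the abstract above can be illustrated with a short sketch. Direct shear results at several normal stresses are commonly fit with a linear Mohr-Coulomb envelope, τ = c + σn·tan φ, where c is the cohesion intercept and φ the friction angle. The test values, function name, and fitted parameters below are hypothetical illustrations only, not data from the study.

```python
import math

# Hypothetical (normal stress kPa, peak shear strength kPa) pairs from
# direct shear tests -- illustrative values only, not data from the study.
tests = [(100, 95), (500, 310), (1000, 560), (2000, 1020)]

def fit_envelope(points):
    """Least-squares fit of a linear Mohr-Coulomb envelope
    tau = c + sigma_n * tan(phi) to direct shear results.

    Returns (c, phi) with c in kPa and phi in degrees."""
    n = len(points)
    sx = sum(s for s, _ in points)
    sy = sum(t for _, t in points)
    sxx = sum(s * s for s, _ in points)
    sxy = sum(s * t for s, t in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = tan(phi)
    c = (sy - slope * sx) / n                          # cohesion intercept
    return c, math.degrees(math.atan(slope))

c, phi = fit_envelope(tests)
print(f"cohesion c = {c:.1f} kPa, friction angle phi = {phi:.1f} deg")
```

Separate fits to the peak and large-displacement strengths would yield the two envelopes (τp and τld) referred to in the abstract.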