Theses and Dissertations
Browsing Theses and Dissertations by Issue Date
Now showing 1 - 20 of 661
Item Open Access
Finite element 2-D transport model of groundwater restoration for in situ solution mining of uranium (Colorado State University. Libraries, 1981)
Warner, James W., author; Sunada, D. K., advisor; Longenbaugh, Robert A., committee member; Morel-Seytoux, H. J., committee member; Ethridge, Frank G., committee member; McWhorter, David B., committee member
Developing technologies such as in situ solution mining of uranium represent a new, more complex solute transport problem in site restoration than traditional transport problems such as contaminant migration. The method consists of injecting a lixiviant through wells into the host aquifer containing the uranium. The uranium is preferentially dissolved, and the uranium-bearing groundwater is recovered through pumping wells. The environmental advantages of solution mining over conventional mining techniques are several; however, it has the disadvantage of potentially contaminating the groundwater system. A computer model of groundwater restoration for the in situ solution mining of uranium is developed and documented. The model is based on the Galerkin finite element method using triangular elements and linear shape functions. The computer model calculates the dual changes in concentration of two reacting solutes subject to binary cation exchange in flowing groundwater. This cation exchange process is important in the groundwater restoration of solution mining. Both the concentration in solution and the concentration adsorbed on the solid aquifer material are calculated for both solutes at specified places and times due to the processes of convective transport, hydrodynamic dispersion, mixing from fluid sources and cation exchange. No other reactions are assumed which would affect the solution concentrations. The model also has the capacity to simulate conservative solute transport. A complete documentation of the computer model and a detailed description of the numerical solution of both the groundwater flow equation and the solute-transport equations are presented. The model was successfully applied to an actual field problem of ammonium restoration for a pilot-scale uranium solution mining operation in northeast Colorado near the town of Grover. The computer model is offered as a basic working tool that should be readily adaptable to many other field problems. The model should have wide applicability for regulatory agencies, mining companies and others concerned with groundwater restoration for in situ solution mining.

Item Open Access
Water quality assessment with routine monitoring data (Colorado State University. Libraries, 1982)
Smillie, Gary M., author; Sanders, Thomas G., advisor; Ward, Robert C., committee member; Loftis, Jim C., committee member
Federal legislation in recent years has required the states to develop water quality management programs which include stream standards and river monitoring. Water quality data routinely collected by state and federal agencies have often been of little use in directly determining stream standards compliance. This problem is due to the discrepancy between the statistical nature of water quality sampling and nonstatistically expressed stream standards. However, the use of probability and statistical models in water quality analysis may provide useful assessments of river water quality relative to stream standards. This research consists of the development and testing of five statistical procedures which allow river water quality to be assessed from available, routinely collected data. The procedures include: 1) probability density function modeling of water quality variables, 2) multiple linear regression modeling of water quality variables, 3) conditional probability modeling of stream standard violations given known river conditions, 4) a water quality index indicating changes in water quality, and 5) a water quality index indicating compliance/noncompliance of water quality variables with stream standards. The utility of each procedure is illustrated with a case study.

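As a rough illustration of the first of these procedures, a probability density function fitted to routine monitoring data can be turned directly into a compliance statement. The sketch below assumes a lognormal model and uses entirely hypothetical concentration values and a hypothetical standard of 6 mg/L; it is not the author's code.

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical routine monitoring record (mg/L) and a hypothetical stream standard.
conc = np.array([2.1, 3.4, 1.8, 5.6, 2.9, 4.2, 3.1, 2.4, 6.8, 3.7, 2.2, 4.9])
standard = 6.0

# Fit a lognormal model: mean and standard deviation of ln(concentration).
log_c = np.log(conc)
mu, sigma = log_c.mean(), log_c.std(ddof=1)

# Probability that a sampled concentration exceeds the standard
# (standard normal survival function written with erfc).
z = (np.log(standard) - mu) / sigma
p_exceed = 0.5 * erfc(z / sqrt(2.0))

print(f"estimated P(concentration > {standard} mg/L) = {p_exceed:.3f}")
```
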
Item Open Access
Dispersion in bi-modal oil shales (Colorado State University. Libraries, 1982)
Bryant, Mark A., author; McWhorter, David B., advisor; Sunada, Daniel K., committee member; Ward, Robert C., committee member
A series of leaching column experiments was conducted using 3 different grain sizes of spent oil shale from the Paraho retorting process. Electrical conductivity breakthrough data produced at 3 different seepage velocities were analyzed with the help of a least squares curve fitting computer model, CFITIM, developed by Van Genuchten (1981). Emphasis was placed on the identification of transport mechanisms which could explain the observed asymmetry of the breakthrough curves. Comparison of the column breakthrough curves to an analytical dispersion model which took into account a micropore diffusion transfer mechanism produced poor correlation. When a linear sorption transfer mechanism was coupled with the micropore diffusion transfer mechanism in the analytical model, a much better match of the breakthrough data was obtained. The analytical model may prove useful in the development of a standard leaching column test procedure; however, it is suspected that the model parameters have little physical significance and therefore can only be used in fitting the breakthrough curves.

Item Open Access
Known discharge uncoupled sediment routing (Colorado State University. Libraries, 1982)
Brown, Glenn O. (Glenn Owen), author; Simons, Daryl B., advisor; Li, Ruh-Ming, advisor; Doehring, Donald O., committee member
A known discharge, uncoupled, sediment routing model, KUWASER, has been developed. The model sequentially solves the steady flow and sediment continuity equations. This procedure allows for efficient solution of sediment routing problems on large river systems. The model can perform backwater calculations and sediment routing in the main stem and multiple tributaries, including divided flow reaches. The user can determine river response to river management practices such as channel improvement, realignment, dredging or tributary modifications. The model was tested against two other models, a stage-discharge relationship and a fixed bed model, by comparing the frequency of model errors in stage prediction. A sensitivity analysis was performed to determine the sensitivity of the model's results to variations in six input parameters. The Yazoo River Basin in Mississippi was used as a case study to demonstrate the model capabilities. The model can be an effective tool in the prediction of river response.

Item Open Access
Seismicity of Libya and related problems (Colorado State University. Libraries, 1983)
Hassen, Hassen A., author
The seismicity of Libya was investigated. Available data on earthquakes which have occurred in or near Libya during the period 262 A.D. to 1982 have been collected. These data, together with geological information, are used to investigate the nature of seismic activity and its relationship to the tectonics of the country. Statistical analysis is used to calculate the frequency-magnitude relation for the data in the period from 1963 to 1982. The results indicate that about 140 earthquakes will equal or exceed a Richter magnitude of 5 every 100 years, and one earthquake will equal or exceed a Richter magnitude of 7 every 100 years. The whole country is characterized by low to moderate levels of seismic activity, but some segments have experienced large earthquakes in this century and earlier. On the basis of observed and expected seismicity, a four-fold subdivision is suggested to define the activity of the different parts of the country. The highest activity is found to be concentrated in Cyrenaica (northeastern region) and around the Hun graben (north central region). The southern part of Libya is considered to be seismically stable. Problems encountered when investigating and predicting future seismicity are discussed. The principal problem is the absence of seismic monitoring stations in the country.

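The frequency-magnitude analysis described above is conventionally cast as the Gutenberg-Richter relation, log10 N(>=M) = a - b*M. A minimal sketch of such a fit is shown below; the catalog, the magnitude grid, and the 20-year window are hypothetical and are not the thesis data.

```python
import numpy as np

# Hypothetical magnitudes from a 20-year catalog (illustrative only).
mags = np.array([3.1, 3.4, 3.2, 3.8, 4.0, 3.5, 4.3, 3.6, 4.7, 3.3,
                 3.9, 4.1, 5.2, 3.7, 4.5, 3.4, 3.2, 4.9, 3.6, 3.8])
years = 20.0

# Cumulative counts N(>=M) on a grid of threshold magnitudes.
thresholds = np.arange(3.0, 5.01, 0.25)
counts = np.array([(mags >= m).sum() for m in thresholds])

# Least-squares fit of log10 N = a - b*M (Gutenberg-Richter).
slope, a = np.polyfit(thresholds, np.log10(counts), 1)
b = -slope

# Expected number of events with M >= 5 per century under the fitted relation.
n_per_century = 10.0 ** (a - b * 5.0) / years * 100.0
print(f"b-value = {b:.2f}, expected M>=5 events per 100 years = {n_per_century:.1f}")
```
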
Item Open Access
Solute transport in overland flow during rainfall (Colorado State University. Libraries, 1985)
Peyton, R. Lee, Jr., author; Sanders, Thomas G., advisor; Adrian, Donald Dean, committee member; Smith, Roger E., committee member; Shen, Hsieh W., committee member; Ward, Robert C., committee member
A numerical model was developed to simulate the movement of a conservative solute in steady overland flow over a smooth impervious plane under a constant rainfall intensity. This movement was described by shear-flow convection, vertical mixing, and rainfall dilution. Mass was convected in flow layers whose velocities varied according to velocity profile relationships developed in this study. Vertical diffusion occurred between flow layers according to the Fickian equation. Mass was diluted due to increasing depth of flow downstream. This model closely reproduced results of several analytical solutions for solute transport in steady, uniform flow. The model was then calibrated to results from overland flow laboratory experiments using the vertical mixing coefficient, εy, as a calibration parameter. A regression analysis was used to relate the calibrated εy values to rainfall and flow variables. The resulting regression equation showed that εy increased with increasing rainfall intensity and with decreasing mean flow velocity. The εy values varied the most at low rainfall intensities and near the top of the overland flow plane. The lower range of the calibrated εy values compared favorably with the molecular diffusion coefficient for the dye tracer used in the laboratory experiments, while the upper range was similar to theoretical vertical mixing coefficients for steady, uniform, turbulent flow at equivalent discharges. It was concluded that the velocity of the peak concentration can vary between the mean cross-sectional velocity and the maximum point velocity, depending on the εy value. It was further concluded that rainfall generally does not produce a continuous state of complete vertical mixing in overland flow. The study was then taken one step further by using the resulting εy equation to examine the length of the convective distance beyond which Taylor's one-dimensional dispersion analogy is generally valid. This distance was found to be very short where vertical mixing was great and very long where vertical mixing was small. In addition, the εy equation was used along with Fischer's theoretical expression for the one-dimensional dispersion coefficient for open-channel flow (based on Taylor's research with pipe flow) to compute dispersion coefficients for overland flow during rainfall. In some cases, negative dispersion coefficients were computed. In further checking the applicability of Fischer's expression, it was concluded that it is not appropriate for all velocity profiles.

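A compact sketch of the transport scheme the abstract describes: solute is convected in horizontal layers moving at different velocities, exchanged vertically between layers at a Fickian rate set by εy, and fed by a steady upstream source. The grid spacing, layer velocities, and εy value below are hypothetical, and the explicit scheme is only meant to show the structure of the calculation, not to reproduce the calibrated model.

```python
import numpy as np

# Hypothetical discretization: nl flow layers by nx streamwise cells.
nl, nx = 5, 200
dx, dz, dt = 0.05, 0.002, 0.01             # m, m, s (illustrative values)
u = np.linspace(0.05, 0.25, nl)            # layer velocities, slowest near the bed (m/s)
eps_y = 1.0e-5                             # vertical mixing coefficient (m^2/s)

c = np.zeros((nl, nx))
c[:, 0] = 1.0                              # continuous upstream source

for _ in range(2000):
    # Streamwise convection in each layer (first-order upwind).
    c[:, 1:] -= (u[:, None] * dt / dx) * (c[:, 1:] - c[:, :-1])
    # Vertical Fickian exchange across each layer interface (conservative).
    flux = eps_y * dt / dz**2 * (c[1:, :] - c[:-1, :])
    c[1:, :] -= flux
    c[:-1, :] += flux
    c[:, 0] = 1.0                          # re-impose the source boundary

print("depth-averaged concentrations, first 10 cells:", c.mean(axis=0)[:10].round(3))
```
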
Item Open Access
Resistance components and velocity distributions of open channel flows over bedforms (Colorado State University. Libraries, 1985)
Fehlman, Henry Michael, author; Shen, H. W., advisor
The components of flow resistance and the velocity distributions of open-channel flows over bedforms were investigated by conducting idealized bedform experiments in a laboratory flume. Experiments involving uniform smooth, uniform rough, and nonuniform smooth bedform elements were performed. Local shear stress and pressure on the bedform surface were measured in order to determine the characteristics of skin and form resistance. Total flow resistance was measured directly and compared to the sum of the resistance components. Using the results of this study and published data, relations were developed to enable the prediction of each resistance component. Boundary layer velocity profiles were measured over the bedform surface in the region of reattached flow. An average velocity distribution was developed based on free stream velocity measurements made over the length of the modeled bedform.

Item Open Access
Solute transport by a volatile solvent (Colorado State University. Libraries, 1987)
Brown, Glenn O. (Glenn Owen), author; McWhorter, David B., advisor; Nelson, John D., advisor; Warner, James W., committee member; Durnford, Deanna S., committee member
Reclamation and impact analysis of retorted oil shale piles will require prediction of water and solute transport rates over the entire solution content range, down to and including the relatively dry region. In such dry materials, vapor transport of water affects the transport of solutes. Experimental measurements of transport coefficients in relatively dry oil shale have brought forward longstanding questions concerning the mechanics of combined liquid-vapor flow. Principal among these is the apparent inability of porous media to transport solutes at low solution contents. In an attempt to ensure proper interpretation of experimental data, a new theory of solute transport by combined liquid-vapor flow has been developed, and new analytical solutions for transient flow have been obtained. The solutions show that the relative magnitudes of the separate transport coefficients produce many of the flow features seen in experimental data, and that significant liquid transport can occur in regions without apparent solute transport. This development is new and represents an addition to the understanding of solute transport. These methods and results can be applied to other problems in multiple phase transport, such as hazardous waste disposal, mine reclamation, and soil leaching.

Item Open Access
A flood frequency derivation technique based on kinematic wave approach (Colorado State University. Libraries, 1987)
Cadavid, Luis Guillermo, author; Obeysekera, J. T. B., advisor; Salas, Jose D., committee member; Schumm, Stanley Alfred, 1927-, committee member
The present study deals with the derivation of a methodology to obtain a flood frequency distribution for small ungaged watersheds where the overland flow phase is considered to be an important timing component. In the hydrological literature, this technique comprises three components: a rainfall infiltration model, an effective rainfall-runoff model and the probabilistic component. The study begins with a review of the Geomorphological Instantaneous Unit Hydrograph (GIUH), in order to establish its applicability to the aforementioned type of watershed. Some effective rainfall-runoff models currently used in the practice of hydrology, like the GIUH and models based on the kinematic wave approach, lack the required features or do not consider all possible responses within the watershed. Therefore, a new model is developed and calibrated, based on the kinematic wave approach, for a first order stream with two symmetrical lateral planes. The model is composed of analytical and approximate solutions, the latter improved via regression analysis. The formulated model is used along with a statistical distribution for the effective rainfall intensity and effective duration in order to derive the flood frequency distribution technique through the probabilistic component. The structure of the equations considered in the different components requires a numerical algorithm to compute the flood frequency distribution curve for a given watershed. The derived technique is tested for hypothetical and real watershed configurations, showing its capability to forecast flood frequency curves for ungaged watersheds and to account for the influence of parameters on the physics of flood formation. Actual watersheds are represented as first order streams with two symmetrical planes.

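The coupling of a runoff model with a probabilistic rainfall description can be illustrated by a Monte Carlo sketch for a single kinematic-wave plane (q = alpha*h^m): sample an effective intensity and duration, compute the peak outflow per unit width, and rank the peaks into an empirical frequency curve. The exponential distributions, the parameter values, and the use of simulation are assumptions for illustration only; the thesis derives the flood frequency distribution through its probabilistic component and a numerical algorithm rather than by sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plane and kinematic-wave parameters (q = alpha * h**m per unit width).
L = 100.0                  # plane length (m)
alpha, m = 2.0, 5.0 / 3.0

def peak_unit_discharge(i_e, d):
    """Kinematic-wave peak outflow per unit width for a plane under a
    constant effective intensity i_e (m/s) lasting d seconds."""
    t_e = (L / (alpha * i_e ** (m - 1.0))) ** (1.0 / m)  # time to equilibrium
    if d >= t_e:
        return i_e * L                  # full-equilibrium peak
    return alpha * (i_e * d) ** m       # partial-equilibrium peak

# Hypothetical storm population: exponential effective intensity and duration.
n = 5000
i_e = rng.exponential(5.0e-6, n)        # ~18 mm/h mean effective intensity
dur = rng.exponential(1800.0, n)        # ~30 min mean effective duration
peaks = np.array([peak_unit_discharge(i, d) for i, d in zip(i_e, dur)])

# Empirical derived frequency curve via Weibull plotting positions.
sorted_peaks = np.sort(peaks)[::-1]
exceed_prob = np.arange(1, n + 1) / (n + 1.0)
idx = np.searchsorted(exceed_prob, 0.01)
print(f"unit-width peak with ~1% exceedance: {sorted_peaks[idx]:.2e} m^2/s")
```
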
Item Open Access
Stochastic modeling of seasonal streamflow (Colorado State University. Libraries, 1987)
Mendonça, Antonio Sergio Ferreira, author; Salas, Jose D., advisor; Fontane, Darrell G., committee member; Loftis, Jim C., committee member; Gessler, Johannes, committee member
This research examines topics in seasonal (monthly, bimonthly, etc.) hydrologic time-series modeling. A family of periodic models was derived by allowing the parameters of a particular Multiplicative Autoregressive Integrated Moving Average (Multiplicative ARIMA) model to vary from season to season. The derived model has parameters relating data for seasons within the same year and parameters relating data for the same season in consecutive years. PARMA models are particular cases of the proposed model, here called the Multiplicative Periodic Autoregressive Moving Average (Multiplicative PARMA) model. Least-squares estimation based on the Powell algorithm for nonlinear optimization was developed for determining the model parameters. Properties such as seasonal variances and autocorrelations were derived analytically for particular cases of the general model. Analysis of the sensitivity of the annual autocorrelograms to the parameters of the model showed that the yearly autoregressive parameters are the most important for the reproduction of high annual autocorrelations. Tests of the model were made through data generation. The model was applied to four- and six-season river discharge series presenting distinct characteristics of variability and dependence. Tests for goodness-of-fit and selection criteria for seasonal series models were also discussed. Results from data generation indicate that the estimation procedure is able to estimate parameters for the Multiplicative PARMA models and can also be used for refinement of estimates made by the method of moments for other models. Application to discharge data from the St. Lawrence, Niger, Elkhorn and Yellowstone rivers showed that the proposed modeling technique is able to preserve long-term dependence better than models currently used in practical hydrology. A direct consequence of this improvement is better reproduction of floods and droughts and greater accuracy in the design and operation of water resource structures.

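The core idea of periodic-parameter modeling can be sketched with the simplest member of the family, a PAR(1) model whose mean, autoregressive coefficient, and noise standard deviation all change with the season; the full Multiplicative PARMA formulation and its least-squares estimation are not reproduced here, and the parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PAR(1): x[t] = mu[s] + phi[s] * (x[t-1] - mu[s-1]) + sigma[s] * noise
n_years, n_seasons = 50, 4
mu = np.array([120.0, 300.0, 180.0, 90.0])      # seasonal means
phi = np.array([0.55, 0.40, 0.65, 0.50])        # season-varying AR(1) coefficients
sigma = np.array([30.0, 80.0, 45.0, 20.0])      # seasonal noise standard deviations

x = np.zeros(n_years * n_seasons)
prev, prev_mu = mu[-1], mu[-1]                  # start from the last season's mean
for t in range(x.size):
    s = t % n_seasons
    x[t] = mu[s] + phi[s] * (prev - prev_mu) + sigma[s] * rng.standard_normal()
    prev, prev_mu = x[t], mu[s]

# The generated series should track the target seasonal means.
print("target means:", mu)
print("sample means:", x.reshape(n_years, n_seasons).mean(axis=0).round(1))
```
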
Item Open Access
Evaluation of alternative design flow criteria for use in effluent discharge permitting (Colorado State University. Libraries, 1987)
Paulson, Cynthia L., author; Sanders, Thomas G., advisor; Ward, Robert C., committee member; Evans, Norman A., committee member
A design flow is the value used to represent upstream or dilution flow in the calculation of effluent permit limits under the National Pollutant Discharge Elimination System (NPDES). The 7Q10 low-flow statistic, or the 7-day moving average low flow that occurs once every ten years on average, has traditionally been used for design flows. Recently, alternative approaches to conventional NPDES permitting techniques have been investigated. The objective of this study was to research alternative design flows that would maximize use of the assimilative capacity of receiving waters while also maintaining water uses. This study addressed two major aspects of design flows. The first was the definition of a set of recommended methods to use in the calculation of design flows. These methods were gathered from the literature or developed in the process of the study. The second aspect was a comparison of alternative design flows and recommendation of guidelines to use in selecting appropriate values. Data at eight sites on streams along the Front Range of Colorado were analyzed. Three types of analysis were applied to define design flows for acute and chronic conditions. Traditional frequency/duration statistics were calculated on an annual, monthly and seasonal basis. An empirical, distribution-free approach developed by the U.S. EPA, called the biologically-based method, was also applied. A simplified version of this method, termed excursion analysis, was developed to augment the information supplied by the biologically-based approach. Design flows calculated with these methods were related to acute or chronic durations and allowable frequency criteria recommended by the U.S. EPA for the protection of aquatic life. The results of the research highlighted the need to establish a standard set of methods to use in the calculation of design flows. Some methods were recommended, while other areas requiring more research were pointed out. The lack of long flow data records above effluent discharge points, where NPDES permit limits are calculated, is a major problem. The results of the flow analysis showed that distribution-based frequency statistic flows do not relate as directly to aquatic life criteria as biologically-based design flows. The level of protection provided by frequency statistic flows varies widely from site to site, while biologically-based flows provide relatively consistent levels of protection. Seasonal or monthly design flows can be used to increase the use of stream assimilative capacity. However, it was shown that the number of excursions, or flows below a given design flow, was substantially higher for monthly frequency statistic flows than for annual values. The implication of this result is that more stringent design flows may be required on a monthly basis, depending on the seasonal needs of aquatic life populations and other water uses. The critical importance of design flow criteria based on aquatic life protection was emphasized in this study. Once criteria have been chosen, the selection of an appropriate annual design flow is relatively straightforward. The biologically-based method, or a similar approach that relates directly to use criteria, is recommended as a better alternative to the 7Q10. Seasonal or monthly flows are also recommended, but will require further research.

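The 7Q10 referenced above can be sketched as follows: form the 7-day moving average of daily flows, take each year's minimum, and estimate the flow with a 10-year recurrence interval (10 percent annual non-exceedance). The flow record below is synthetic, and the empirical quantile is only a stand-in for the distribution fit (commonly log-Pearson Type III) used in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 30-year daily flow record in L/s (illustrative only).
n_years, n_days = 30, 365
flows = rng.lognormal(mean=6.0, sigma=0.6, size=(n_years, n_days))

def annual_7day_min(daily):
    """Minimum 7-day moving-average flow within one year of daily values."""
    ma7 = np.convolve(daily, np.ones(7) / 7.0, mode="valid")
    return ma7.min()

annual_minima = np.array([annual_7day_min(year) for year in flows])

# 7Q10: the annual 7-day minimum with a 10-year recurrence interval,
# estimated here as the empirical 10th percentile of the annual minima.
q7_10 = np.quantile(annual_minima, 0.10)
print(f"empirical 7Q10 estimate: {q7_10:.1f} L/s")
```
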
Item Open Access
Electrohydrodynamic flow in a barbed plate electrostatic precipitator (Colorado State University. Libraries, 1988)
McKinney, Peter J., author; Davidson, Jane H., advisor; Wilbur, Paul J., committee member; Sandborn, Virgil A., committee member; Meroney, Robert N., committee member
The large-scale secondary flows and turbulence induced by the inhomogeneous negative corona discharge in the conventional wire-plate precipitator are known to reduce collection efficiencies, particularly in applications with high mass loadings of fine particulates. Electrohydrodynamic theory suggests that a modification in electrode geometry is necessary to control the electrically induced flow. A plate-plate precipitator using a barbed plate discharge electrode is designed to provide a more uniform current density distribution. Electrical and fluid dynamic characteristics of four model barbed plate electrodes, with varying plate-to-plate and barb spacing, are evaluated and compared to characteristics of a laboratory wire-plate precipitator in a specially designed wind tunnel facility. Current-voltage characteristics of each electrode are presented and the visual appearance of the corona discharge is discussed. Hot-film anemometer measurements of the turbulent flow field downstream of the active precipitator include mean and turbulence intensity profiles, as well as spectral analysis of the flow. Gas eddy diffusivities are estimated from integral length scale calculations. A laser light sheet is used to visualize the flow in the inter-electrode space. Results show that the electrical characteristics of the planar electrodes are well within the range needed for industrial precipitation and that the scale of the current inhomogeneities within the precipitator is reduced. Fluid dynamic measurements confirm that electrode geometry has a significant effect on the electrohydrodynamic turbulence production. Turbulence intensity data indicate that the point discharges in the planar geometry cause higher turbulence levels than the wire discharges. Turbulent diffusivities are correspondingly higher in the planar geometry. These results indicate that mixing may actually be enhanced in the suggested design. Flow field measurements made downstream of the precipitator may not, however, be representative of the electrically induced flow within the precipitator. Plate end effects observed in the visualization procedure may have a significant effect on the downstream flow and bias the measurements. Additional study is necessary to determine if the planar geometry is a viable design. The most important test of any new precipitator design is measurement of its particle collection efficiency.

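As a point of reference for the collection-efficiency question raised at the end of the abstract, the classical Deutsch-Anderson relation, eta = 1 - exp(-w*A/Q), ties an effective particle drift velocity w to collection efficiency for plate area A and gas flow rate Q. The numbers below are hypothetical, and the relation neglects exactly the turbulence and mixing effects the study measures.

```python
from math import exp

# Hypothetical precipitator parameters (illustrative only).
w = 0.10      # effective particle drift velocity (m/s)
A = 500.0     # total collection plate area (m^2)
Q = 20.0      # gas volumetric flow rate (m^3/s)

# Deutsch-Anderson estimate of fractional collection efficiency.
eta = 1.0 - exp(-w * A / Q)
print(f"estimated collection efficiency: {eta:.3f}")
```
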
Item Open Access
Time and scale effects in laboratory permeability testing of compacted clay soil (Colorado State University. Libraries, 1989)
Javed, Farhat, author; Shackelford, Charles D., advisor; Jameson, Donald A., advisor; Doehring, Donald O., committee member; Abt, Steven R., committee member
Permeability (hydraulic conductivity) testing of clays in the laboratory typically requires a significant amount of time. It is hypothesized that the time required for a clay permeability test can be reduced substantially through a statistical modelling technique known as "time series analysis". In order to test this hypothesis, permeability tests were performed on compacted samples of a silty clay soil in a standard Proctor mold (9.4 × 10⁻⁴ m³). The soil was separated into five different fractions representing five ranges in precompaction clod sizes. Constant-head permeability tests were performed on each of these five fractions. Tests were replicated five times for the time series analysis. The results of the analysis indicate that time series modelling can significantly reduce the statistical error associated with permeability data. It is demonstrated that the time required for a clay permeability test can be reduced appreciably through time series modelling. Permeability tests also were performed on four soil fractions in a large-scale (0.914 m × 0.914 m × 0.457 m) double-ring, rigid-wall permeameter. The results of the small-scale (Proctor mold) permeability tests indicate that the soil permeability does not vary much with a change in the precompaction clod size. The presence of large clods (> 25 mm), however, may result in side-wall leakage. The large-scale tests indicated that permeability is strongly related to the precompaction clod sizes. Permeability of the soil increased more than two orders of magnitude as the maximum precompaction clod size increased from 4.75 mm to 75 mm. Comparison of the results from the small-scale and the large-scale tests indicated that, for all soil fractions, the large-scale permeability was higher by more than an order of magnitude. As a result, there appears to be a scale effect associated with laboratory permeability testing. This scale effect is more significant when the soil contains a considerable quantity of clods that are large relative to the size of the permeameter. These results imply that the large-scale test is more capable of accounting for the hydraulic defects resulting from large clods. A more realistic evaluation of the field permeability of a compacted clay, therefore, may be possible in the laboratory if the permeameter is fairly large relative to the maximum precompaction size of clods present under field conditions.

Item Open Access
Selection of modeling and monitoring strategies for estuarine water quality management (Colorado State University. Libraries, 1989)
Sivakumaran, Kumaraswamy, author; Grigg, N. S., advisor; Ward, Robert C., advisor; Sanders, T. G., committee member; Fontane, Darrell G., committee member
Estuarine water quality management is challenging owing to the complex hydrodynamics, water quality kinetics, international and domestic legislation, and human impacts that take place in an estuarine environment. Scientists respond to this challenge by 'observing, hypothesizing and predicting' the behavior of the estuary. This is accomplished by developing water quality models, which are idealizations of that behavior, and by water quality monitoring. As most of the water quality models were developed for research, they did not serve the purposes of management. Because scientific methods were not widely known, the direction by Congress to collect water quality data led to non-scientific methods of collecting data. The research (with the objective of using water quality models effectively) embarked on designing a water quality monitoring system using a model. A model based on the hypothesis of conservation of mass was expressed as a one-dimensional convective-diffusion equation. The convective-diffusion equation was then solved recursively. Field observations from the Potomac estuary were obtained from government agencies and reports. An algorithm developed by Kalman was used to combine the model predictions and field measurements. In order to design the monitoring system, the term 'TRACE OF ESTUARY' (TOE) was defined. The relative value of TOE determined the optimum number of sampling locations for an ongoing water quality monitoring program. The approach resulted in the reduction of sampling locations in the Potomac estuary from 12 to 5. It also showed that water quality data must be representative of similar-sized segments. The concept of using the physical behavior of the system to design a water quality monitoring network was established. It was further established that the use of "better and accurate" models (not necessarily complex models) will reduce the number of sampling points. The significance of the research is that: (i) modeling and monitoring are used in an integrated fashion; (ii) a scientific approach is used to determine the number of sampling locations; and (iii) an accurate model will lead to a reduction in the sampling locations necessary for water quality management.

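A minimal sketch of the Kalman updating step used to combine model predictions with field measurements, reduced here to a single scalar water quality state; the variances and values are hypothetical, whereas the thesis applies the filter to a one-dimensional convective-diffusion model of the Potomac estuary.

```python
# Scalar Kalman update: blend a model prediction with a measurement
# according to their error variances (all numbers hypothetical).
x_pred = 5.2      # model-predicted concentration (mg/L)
P_pred = 1.0      # variance of the model prediction
z = 4.5           # field measurement (mg/L)
R = 0.25          # measurement error variance

K = P_pred / (P_pred + R)          # Kalman gain
x_upd = x_pred + K * (z - x_pred)  # updated estimate
P_upd = (1.0 - K) * P_pred         # reduced uncertainty after the update

print(f"gain = {K:.2f}, updated estimate = {x_upd:.2f} mg/L, updated variance = {P_upd:.2f}")
```
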
Item Open Access
Modeling the stream temperature regime of the East Fork of the Virgin River in Zion National Park (Colorado State University. Libraries, 1991)
Peterson, Karen L., author; Sanders, Thomas G., advisor; Hendricks, David W., committee member; Parce, Stanley L., committee member; Ward, Robert C., committee member
The following stream temperature study was conducted as part of a general study by the Water Rights Branch, Water Resources Division, National Park Service, to evaluate the physical habitat of the aquatic organisms within Zion National Park (ZION). Stream temperature is an aquatic habitat characteristic that is known to be a controlling variable in the successful existence of the Virgin spinedace (Espinosa, 1978). The Virgin River spinedace, a non-game fish which is endemic to the East Fork of the Virgin River, was delineated as the target organism, as it has been recommended for classification as threatened (50 F.R. 37959). The first objective of the study was to measure and describe existing stream temperatures of the East Fork of the Virgin River at Virgin River Mile (VRM) 157.3. Diurnal fluctuations in the stream temperature of 10°C were common. The average maximum, mean, and minimum stream temperatures for the study period were 26.7°C, 21.8°C, and 17.0°C, respectively; the average flow during this period was 1076 l/s. A second objective of the study was to predict the response of the daily fluctuations and mean daily stream temperature at VRM 157.3 to perturbations in stream temperature and discharge at the upstream (eastern) Zion National Park boundary. Stream, shading, and site characteristic data were collected along a 9.3 km reach on the East Fork and input into TEMP-84, a stream temperature model, for simulation of existing and perturbed flows of 283 l/s (10 cfs), 566 l/s (20 cfs), 2,124 l/s (75 cfs), 2,832 l/s (100 cfs), 14,160 l/s (500 cfs), and 28,320 l/s (1000 cfs). Perturbed inflow temperature conditions were delineated as equal to the average ambient temperature and to the groundwater temperature. Modeled results were evaluated in terms of the relative change in maximum, mean, and minimum stream temperature from that modeled for existing conditions. The relative change was then applied to measured stream temperatures to estimate stream temperatures for the selected hypothetical condition. Results from the modeling exercise demonstrated sharply dampened diurnal fluctuations at VRM 157.3, from an average of 10.1°C under existing conditions to 4.7°C as the flow increased to 2,832 l/s. As the flow was increased beyond 2,832 l/s, the diurnal fluctuation at VRM 157.3 decreased further and approached that of VRM 163.1 at the upstream end of the study reach. Mean stream temperatures at VRM 157.3 decreased by an average of 2.4°C as the flow increased to 14,160 l/s. Simulated flows less than baseflow produced dramatically increased diurnal fluctuations; diurnal fluctuations of 17.3°C were simulated for flows of 283 l/s. Mean stream temperatures increased by an average of 1.5°C when inflow was decreased to 283 l/s. Hypothetical inflow temperature simulations depicted a clear shift in the diurnal fluctuation at VRM 157.3 in the direction of the change in inflow stream temperature at VRM 163.1. Mean stream temperatures increased by an average of 4.6°C when inflow was equal to the average ambient temperature and decreased by an average of 2.0°C when inflow was equal to the groundwater temperature.

Item Open Access
Disaggregation of precipitation records (Colorado State University. Libraries, 1991)
Cadavid, Luis Guillermo, author; Salas, Jose D., advisor; Boes, Duane C., committee member; Yevjevich, Vujica M., 1913-, committee member; Fontane, Darrell G., committee member
This investigation is related to the temporal disaggregation of precipitation records. The objective is to formulate algorithms to disaggregate precipitation defined at a given time scale into precipitation at smaller time scales, assuming that a certain mechanism or stochastic process originates the precipitation process. The disaggregation algorithm should preserve the additivity property and the sample statistical properties at several aggregation levels. Disaggregation algorithms were developed for two models which belong to the class of continuous time point processes: Poisson White Noise (PWN) and Neyman-Scott White Noise (NSWN). Precipitation arrivals are controlled by a counting process, and storm activity is represented by instantaneous amounts of precipitation (White Noise terms). Algorithms were tested using simulated samples and data collected at four precipitation stations in Colorado. The PWN model is the simpler of the two, and formulation of the disaggregation model was successful. The algorithm is based on the distribution of the number of arrivals (N) conditional on the total precipitation in the time interval (Y), the distribution of the White Noise terms conditional on N and Y, and the distribution of the arrival times conditional on N. Its application to disaggregate precipitation is limited due to its lack of serial correlation. However, the PWN disaggregation model performs well on PWN simulated samples. The NSWN model is more complex. The required distributions are the same as for the PWN model. Formulation of a disaggregation algorithm was based on theoretical and empirical results. A procedure for model parameter estimation based on weighted least squares was implemented. This procedure reduces the number of estimation failures as compared to the method of moments. The NSWN disaggregation model performed well on simulated and recorded samples, provided that the parameters used are similar to those controlling the process at the disaggregation scale. The main shortcoming is the incompatibility of parameter estimates at different aggregation levels. This renders the disaggregation model of limited application. Examination of the variation of parameter estimates with the aggregation scale suggests the existence of a region where estimated values appear to be compatible. Finally, it is shown that the use of information from a nearby precipitation station with a similar precipitation regime may improve the parameter values to use in disaggregation.

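The structure of the PWN disaggregation algorithm can be sketched as follows: given a period total Y, draw the number of arrivals N from its conditional distribution, place the arrival times uniformly in the interval (a property of the Poisson process), and split Y among the arrivals. The sketch below assumes exponential White Noise terms, for which the split given N and Y is a symmetric Dirichlet; the sampling of N is simplified to a zero-truncated Poisson, and the rate, total, and interval are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def disaggregate_pwn(total, interval, rate):
    """Split a period precipitation total into instantaneous PWN arrivals.
    Sketch only: N is drawn from a zero-truncated Poisson rather than from
    its exact conditional distribution given the total."""
    n = 0
    while n == 0:                                   # at least one arrival, since total > 0
        n = rng.poisson(rate * interval)
    times = np.sort(rng.uniform(0.0, interval, n))  # arrival times | N are uniform
    weights = rng.dirichlet(np.ones(n))             # exponential amounts | N, Y
    return times, total * weights

times, amounts = disaggregate_pwn(total=24.0, interval=24.0, rate=0.2)
print("arrival times (h):", times.round(2))
print("amounts (mm):", amounts.round(2))
print("additivity check:", amounts.sum())
```
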
Item Open Access
Comparison of direct shear and triaxial tests for measurement of shear strength of sand (Colorado State University. Libraries, 1991)
Rahman, Jamshed, author; Nelson, John D., advisor; Siller, Thomas J., committee member; Mielke, Paul W., committee member
Ascertaining the shear strength parameters of soils for engineering purposes is fundamental to soil mechanics and basic to the design of earth-bearing and earth-retaining structures. Direct shear and triaxial tests are the most popular laboratory methods used to determine these parameters. The direct shear test is used widely because it is simple and quick. The test has several disadvantages, however; the non-uniform stress-strain behavior, the rotation of principal planes during the test, and the imposition of the failure plane are chief among them. The triaxial test was designed as a possible alternative that eliminates some of these disadvantages. Direct shear test results are always comparable to those of the triaxial test; the difference usually is negligible from a practical point of view. Data, however, are lacking that determine the difference and establish a correlation between the results of the two tests. This study compares the two tests for measurement of the shear strength parameters of sand. Triaxial and direct shear tests were performed on silica sand under the same density and normal stress conditions. Five sets of triaxial tests and 20 direct shear tests each were performed, using four different makes of direct shear machines. The results of the direct shear tests were compared with those of the triaxial tests, the latter being considered the benchmark. The possible effect of the structural features of the direct shear equipment on the results was briefly studied. The results showed that the shear strengths from the direct shear tests are higher than those from the triaxial tests. All four direct shear machines gave cohesion values different from each other and higher than the benchmark value. The Soiltest and Wykeham Farrance machines gave almost the same friction angle, which was higher than the benchmark value by 4 degrees. The friction angle from the ELE machine was higher by 2.7 degrees, while that from the Clockhouse machine was lower by 4.5 degrees, as compared to the benchmark value.

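The shear strength parameters compared above come from fitting the Mohr-Coulomb envelope, tau_f = c + sigma_n * tan(phi), to peak shear stresses measured at several normal stresses. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical direct shear results: normal stress vs. peak shear stress (kPa).
sigma_n = np.array([50.0, 100.0, 200.0, 400.0])
tau_f = np.array([38.0, 74.0, 148.0, 292.0])

# Linear Mohr-Coulomb envelope: tau_f = c + sigma_n * tan(phi).
slope, c = np.polyfit(sigma_n, tau_f, 1)
phi = np.degrees(np.arctan(slope))

print(f"cohesion c = {c:.1f} kPa, friction angle phi = {phi:.1f} degrees")
```
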
Item Open Access
Current distribution and particle motion in a barbed plate electrostatic precipitator (Colorado State University. Libraries, 1993)
McKinney, Peter J., author; Davidson, Jane H., advisor; Gessler, Johannes, committee member; Wilbur, Paul J., committee member; Meroney, Robert N., committee member
Electrohydrodynamic theory suggests that a modification in electrode geometry is a method of creating more favorable electrical and flow conditions in electrostatic precipitators. A novel barbed plate precipitator is designed to provide a more uniform current density distribution and electric field in the inter-electrode gap. Ground plate current densities of both a conventional wire-plate precipitator and the optimized barbed plate precipitator are compared. Particle motion is observed via a laser light-sheet and measured with a laser Doppler anemometer. Streamwise and transverse mean and fluctuating particle velocities, particle motion length scales and diffusivities are measured at electrical and flow conditions typical of industrial precipitators. Ground plate particle collection patterns are photographed. Results show that a hexagonal arrangement of barbs provides a more uniform current density distribution and electric field than exist in the wire-plate geometry. Additionally, the barbed plate creates a stronger electric field throughout most of the inter-electrode space and therefore generates higher particle drift velocities. However, the barbed plate increases the magnitude of the electrically generated turbulence. Length scales are of the same order in the two geometries, even though the electrode spacing of the barbed plate is double that of the wire-plate precipitator. From an electrical standpoint, the barbed plate design is superior to the wire-plate precipitator. The more uniform distribution of current and electric field, coupled with higher levels of mixing, suggests that the barbed plate may be most suitable for use as a precharger in the entrance section of a parallel plate precipitator.

Item Open Access
Vertical sorting within dune structure (Colorado State University. Libraries, 1993)
Abdel-Motaleb, Mohamed M. M., author; Gessler, Johannes, advisor; Molinas, Albert, committee member; Richardson, E. V., committee member; Zachmann, D. W., committee member
Vertical sorting of sediment mixtures within dune structures was measured experimentally by conducting three types of experiments: running water experiments, still water experiments, and air experiments. Five different sediment mixtures with known initial gradations were used. The median grain diameters for the five sediment mixtures were between 0.35 mm and 0.86 mm, and the geometric standard deviations of the same mixtures were between 2.30 and 2.90. In the running water experiments, each experiment was continued until the dunes were fully developed down the flume; each dune was then sampled along several horizontal layers. In the still water experiments, a delta shape was deposited foreset by foreset, one following another in a continuous way. In the air experiments, the sand mixture was deposited as in the still water experiments. These experiments were conducted to study the effect of the gravitational force on the vertical sorting process. The results of the running water experiments clearly demonstrated the vertical sorting process (vertical reduction in the sediment grain diameter) within the two-dimensional dunes. The still water and air experiments also showed the importance of the hydrodynamic force on the sorting process. A prediction equation relating the median grain diameter within the dune structure in the vertical direction to the backflow velocity on the lee face of the dune and the submerged weight of the sediment particle was used to calculate the vertical median grain diameter for the two-dimensional dune and compared with the measured data. Five dimensionless parameters were tested to correct the error in the predicted values; the dimensionless grain diameter gave the best result. Other sets of laboratory and field data for a point bar and three-dimensional dunes showed the sorting phenomenon. Calculated values of the vertical median grain diameter for the three-dimensional dunes, computed using the prediction equation, agreed reasonably well with the observed values.

Item Open Access
Framework for evaluating water quality information system performance (Colorado State University. Libraries, 1994)
Hotto, Harvey P., author
Water resource and water quality managers are being held increasingly accountable for the programs they manage. Much progress has been made in applying total systems perspectives to the design and operation of water quality monitoring and information programs, and towards rationalizing those programs with respect to management objectives and information needs. A recent example of that progress is the development of data analysis protocols to enhance the information system design process. However, further work is necessary to develop approaches which can help managers confront the water quality management environment of the future, which will be characterized by: (1) fewer purely technical questions, (2) more complex problems with social, economic, political and legal ramifications, and (3) actively managed and continuously improved water quality information systems. This research concludes that the management of water quality information systems for continuous improvement requires: (1) a competent system design process, (2) comprehensive documentation of system design and operation, and (3) a routine and thorough performance measurement and evaluation process. The framework for evaluating water quality information system performance presented in this dissertation integrates the experience of several disciplines into an instrument to help water quality managers accomplish these requirements. The framework embodies four phases: (1) evaluation planning, (2) watershed and management system analyses, (3) information system analysis, and (4) information system performance evaluation. The application of the framework is demonstrated in the evaluation of water quality monitoring programs associated with a unique municipal water transfer project. Water quality professionals of the U.S. Environmental Protection Agency and the U.S. Geological Survey are surveyed as to its potential application to large (e.g., regional or national) systems. Those exercises indicate that the framework is a convenient, economical, and flexible instrument for enhancing water quality information system performance. Recommendations for future research to refine the framework and to extend its scope and utility are also presented.