2020-
Permanent URI for this collection: https://hdl.handle.net/10217/182111
Browsing 2020- by Issue Date
Now showing 1 - 20 of 2736
Item Open Access Causal inference using observational data - case studies in climate science (Colorado State University. Libraries, 2020) Samarasinghe, Savini M., author; Ebert-Uphoff, Imme, advisor; Anderson, Chuck, committee member; Chong, Edwin, committee member; Kirby, Michael, committee member
We are in an era where atmospheric science is data-rich in both observations (e.g., satellite/sensor data) and model output. Our goal with causal discovery is to apply suitable data science approaches to climate data to make inferences about the cause-effect relationships between climate variables. In this research, we focus on using observational studies, an approach that does not rely on controlled experiments, to infer cause-effect. Due to reasons such as latent variables, these observational studies do not allow us to prove causal relationships. Nevertheless, they provide data-driven hypotheses of the interactions, which can give us insight into the salient interactions as well as the timescales at which they occur. Even though there are many different causal inference frameworks and methods that rely on observational studies, these approaches have not found widespread use within the climate or Earth science communities. To date, the most commonly used observational approaches include lagged correlation/regression analysis, as well as the bivariate Granger causality approach. We can attribute this lack of popularity to two main reasons. First is the inherent difficulty of inferring cause-effect in climate. Complex processes in the climate interact with each other over varying time spans. These interactions can be nonlinear, the distributions of relevant climate variables can be non-Gaussian, and the processes can be chaotic. A researcher interested in these causal inference problems faces many challenges, from identifying suitable variables, data, preprocessing, and inference methods to setting up the inference problem in a physically meaningful way. Also, limited exposure and access to modern causal inference approaches is another reason for their limited use within the climate science community. In this dissertation, we present three case studies related to causal inference in climate science, namely, (1) causal relationships between the Arctic temperature and mid-latitude circulations, (2) relationships between the Madden-Julian Oscillation (MJO) and the North Atlantic Oscillation (NAO), and (3) the causal relationships between atmospheric disturbances of different spatial scales (e.g., planetary vs. synoptic). We use methods based on probabilistic graphical models to infer cause-effect, specifically constraint-based structure learning methods and graphical Granger methods. For each case study, we analyze and document the scientific thought process of setting up the problem, the challenges faced, and how we have dealt with the challenges. The challenges discussed include, but are not limited to, method selection, variable representation, and data preparation. We also present a successful high-dimensional study of causal discovery in spectral space. The main objectives of this research are to make causal inference methods more accessible to a researcher/climate scientist who is new to spatiotemporal causality and to promote more modern causal inference methods to the climate science community.
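As a brief editorial aside on the bivariate Granger causality baseline mentioned in this abstract (this is not the dissertation's own analysis), a minimal sketch using statsmodels might look like the following; the two synthetic series, the 2-month driving lag, and the coefficients are invented stand-ins for climate indices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-ins for two monthly climate indices (invented, not real data):
# y is driven by x at a 2-step lag plus noise, so x should "Granger-cause" y.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * x[t - 2] + 0.3 * y[t - 1] + rng.standard_normal()

data = pd.DataFrame({"y": y, "x": x})

# Column order matters: the test asks whether lags of the 2nd column help
# predict the 1st column beyond the 1st column's own lags.
results = grangercausalitytests(data[["y", "x"]], maxlag=4)
print(results[2][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num) at lag 2
```

The test asks whether the history of one series improves prediction of another beyond that series' own history, which is the sense of "Granger causality" used in lagged observational analyses of this kind.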
The case studies, covering a wide range of questions and challenges, are meant to serve as a useful starting point for a researcher interested in tackling more general causal inference problems in climate.
Item Open Access Green communication and security in wireless networks based on Markov decision process and semivariance optimization (Colorado State University. Libraries, 2020) Elsherif, Fateh, author; Chong, Edwin K. P., advisor; Jayasumana, Anura P., committee member; Luo, J. Rockey, committee member; Atadero, Rebecca, committee member
Wireless networking has become an integral part of our everyday life. Certainly, wireless technologies have improved many aspects of the way people communicate, interact, and perform tasks, in addition to enabling new use cases, such as massive machine-type communications and industry verticals, among others. While convenient, these technologies impose new challenges and introduce new design problems. In this dissertation, we consider three problems in wireless networking. Specifically, we formulate optimization problems in green communication and security, and develop computationally efficient solutions to these optimization problems. First, we study the problem of base station (BS) dynamic switching for energy-efficient design of fifth generation (5G) cellular networks and beyond. We formulate this problem as a Markov decision process (MDP) and use an approximation method known as policy rollout to solve it. This method employs Monte Carlo sampling to approximate the Q-value. In this work, we introduce a novel approach to design an energy-efficient algorithm based on MDP to control the ON/OFF switching of BSs; we exploit user mobility and location information in the selection of the optimal control actions. We start our formulation with the simple case of one-user one-ON. We then gradually and systematically extend this formulation to the multi-user multi-ON scenario. Simulation results show the potential of our novel approach of exploiting user mobility information within the MDP framework to achieve significant energy savings while providing quality-of-service guarantees. Second, we study the problem of jamming-aware multipath routing in wireless networks. Multipath routing is a technique for transmitting data from one or more source node(s) to one or more destination node(s) over multiple routing paths. To address the problem of jamming-mitigation multipath routing, we propose a new framework for mitigating jamming risk based on semivariance optimization. Semivariance is a mathematical quantity used originally in finance and economics to measure the dispersion of a portfolio return below a risk-aversion benchmark. We map the problem of jamming-mitigation multipath routing to that of portfolio selection within the semivariance risk framework. Then we use this framework to design a new, computationally feasible RF-jamming mitigation algorithm. We use simulation to study the properties of our method and demonstrate its efficacy relative to a competing scheme that optimizes the jamming risk in terms of variance instead of semivariance. To the best of our knowledge, our work is the first to use semivariance as a measure of jamming risk. Directly optimizing objective functions that involve exact semivariance introduces certain computational issues. However, there are approximations to the semivariance that overcome these issues.
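To make the below-benchmark semivariance idea used in this abstract concrete (an editorial illustration, not the dissertation's algorithm), a minimal Python sketch is shown below; the throughput distribution and the benchmark value are invented, and the final ratio mirrors the "risk-adjusted throughput" defined later in the abstract.

```python
import numpy as np

def semivariance(samples, benchmark):
    """Mean squared shortfall below a benchmark (below-target semivariance)."""
    shortfall = np.minimum(samples - benchmark, 0.0)
    return np.mean(shortfall ** 2)

# Illustrative throughput samples (Mbps) for one candidate path or RAT allocation.
rng = np.random.default_rng(1)
throughput = rng.normal(loc=50.0, scale=10.0, size=10_000)

benchmark = 45.0                                   # risk-aversion benchmark (invented)
sv = semivariance(throughput, benchmark)
semideviation = np.sqrt(sv)                        # square root of semivariance
risk_adjusted = throughput.mean() / semideviation  # "risk-adjusted throughput" style ratio
print(sv, semideviation, risk_adjusted)
```

Unlike variance, this quantity penalizes only outcomes that fall below the benchmark, which is why it is attractive as a risk measure for jamming or throughput shortfalls.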
We study semivariance problems—from the literature of finance and economics—and survey their solutions. Based on one of these solutions, we develop an efficient algorithm for solving semivariance optimization problems. Efficiency is imperative for many telecommunication applications such as tactile Internet and Internet of Things (recall that these types of applications have stringent constraints on latency and computing power). Our algorithm provides a general approach to solving semivariance optimization problems, and can be used in other applications. Last, we consider the problem of multiple-radio-access-technology (multi-RAT) connectivity in heterogeneous networks (HetNets). Recently, multi-RAT connectivity has received significant attention—both from industry and academia—because of its potential as a method to increase throughput, to enhance communication reliability, and to minimize communication latency. We introduce a new approach to the problem of multi-RAT traffic allocation in HetNets. We propose a new risk-averse multi-RAT connectivity (RAM) algorithm. Our RAM algorithm allows trading off expected throughput for risk measured in throughput semivariance. Here we also adopt semivariance as a measure of throughput dispersion below a risk-aversion throughput benchmark. We then formulate the multi-RAT connectivity problem as a semivariance-optimization problem. However, we tackle a different optimization problem in this part of the research. The objective function considered here differs from that of the optimization problem above, which also uses semivariance to quantify risk, because the underlying standard form of portfolio selection is different. In addition, the set of constraints is different in this optimization problem: we introduce new capacity constraints to account for the stochastic capacity of the involved wireless links. We also introduce a new performance metric, the risk-adjusted throughput: the ratio between the expected throughput and the throughput semideviation, where semideviation is the square root of semivariance. We evaluate the performance of our algorithm through simulation of a system with three radio-access technologies: 4G LTE, 5G NR, and WiFi. Simulation results show the potential gains of using our algorithm.
Item Open Access Immune modulatory and antimicrobial properties of mesenchymal stromal cells delivered systemically (Colorado State University. Libraries, 2020) Johnson, Valerie, author; Dow, Steve, advisor; Zabel, Mark, advisor; Avery, Anne, committee member; Tjalkens, Ron, committee member
To view the abstract, please see the full text of the document.
Item Open Access Evaluation of GENE-UP and TEMPO AC for determination of Shiga-Toxin producing Escherichia coli and total aerobic microbial populations from MicroTally sheets used to sample beef carcasses and hides (Colorado State University. Libraries, 2020) Liu, Tianqing, author; Belk, Keith E., advisor; Yang, Hua, advisor; Weir, Tiffany L., committee member; Zagmutt, Francisco J., committee member
Two studies were conducted to evaluate GENE-UP and TEMPO AC (bioMérieux, Marcy-l'Étoile, France) for determination of Shiga-Toxin producing Escherichia coli and total aerobic microbial populations from MicroTally Sheets (Fremonta Corporation, Fremont, CA) used to sample beef carcasses and hides.
The first study was conducted to evaluate the automated TEMPO® AC Test in comparison with a traditional direct agar plating method for enumeration of aerobic mesophilic flora in MicroTally sheets used to sample beef carcasses and hides. A total of 160 MicroTally (MT) sheet samples were collected from commercial beef processing plants by swab-sampling the surface of naturally contaminated pre-evisceration carcasses, hides, and post-chill final carcasses, and analyzed within 24 h after sample collection. All 160 samples were within the detection limit and were analyzed by both the automated TEMPO AC test and the traditional direct agar plating method. The aerobic count correlation coefficient was high (0.93) for pre-evisceration carcasses, which had mean (± standard deviation) counts of 3.3 ± 0.9 and 3.1 ± 0.8 log CFU/mL for the two methods, respectively. The aerobic count correlation coefficients were higher (0.95 and 0.96) for MT samples from hides and post-chill final carcasses, which had mean (± standard deviation) counts of 5.3 ± 1.2 and 5.0 ± 1.2, and 3.0 ± 1.4 and 3.0 ± 1.3 log CFU/mL for the two methods, respectively. Overall, 98.8% of aerobic count results were within a 1.0-log difference between the two enumeration methods. The correlation coefficient (r = 0.97) and linear regression (log TEMPO MPN/mL = 1.06 × log PCA-CFU/mL + 0.03) between the two methods were calculated for our whole sample set (n = 160). Our results demonstrated that the automated MPN method, the TEMPO AC Test, generated total aerobic mesophilic microflora counts that were highly correlated and consistent with the counts obtained by traditional plating methods for enumerating total aerobic mesophilic microbial populations recovered from MicroTally sheets. Use of the TEMPO AC test for MicroTally sheet analysis could save time and labor for the meat industry as it conducts microbial analyses. The second study was conducted to determine the specificity of bioMérieux's GENE-UP, a PCR-based molecular diagnostic system, to detect Shiga Toxin-producing Escherichia coli (STEC) from samples collected from beef processing plants using MicroTally sheets with the manual sampling device method. A total of 194 MicroTally (MT) samples were collected from beef processing plants and analyzed for determination of the top 6 STEC and E. coli O157:H7 (top 7 STEC) using the GENE-UP system, BioRad commercial kits, and BioControl GDS kits. Fifty MT samples were collected by swabbing pre-evisceration carcasses and inoculated with hide-derived inocula, while the remaining 144 MT samples were obtained from post-chill final carcasses in sales coolers and inoculated with E. coli strains. All inoculated MT samples were enriched for 8 and 10 hours at 42°C in buffered peptone water (BPW) and re-collected after incubation. Eight-hour and 10-hour enrichment samples were analyzed using the GENE-UP system at Colorado State University and sent to the U.S. Meat Animal Research Center (USMARC, Clay Center, NE) for detection of the top 6 STEC and E. coli O157:H7. The GENE-UP system uses the EH1 assay to detect the stx and eae genes, the ECO assay to detect genes specific to the O157:H7 serogroup, and the EH2 assay to differentiate the top 6 serogroups. These virulence genes, including the Shiga-toxin gene (stx), the intimin-encoding eae gene, and genes specific to the top 7 serogroups, are highly related to pathogenic STEC.
The NM-EHEC assay targeting the virulence genes espK, espV, and CRISPR_O26E does not directly differentiate the top 7 STEC, but serves as an additional screening test to help identify the presence of any of the top 7 STEC. All potential positive samples determined by PCR screening were plated onto selective agar for culture confirmation. After the immunoconcentration step, isolates picked from selective agar were subjected to additional PCR screening. BioRad and BioControl GDS PCR screening methods were used following their standard protocols for determination of the top 7 STEC at USMARC. Presumptive positive samples confirmed by the additional PCR test were designated as "true positives." Presumptive positive samples that were not confirmed by the additional PCR test were designated as "regulatory false positives." Overall, our results indicated that the GENE-UP system worked well in the detection of the top 7 STEC recovered from the MicroTally sheets. In order to reduce or eliminate false negative results, a 10-h enrichment time in BPW was required for detection of both the top 6 STEC and E. coli O157:H7. Compared to GENE-UP and GDS, BioRad generated a much higher number of potential positives that required cultural confirmation. Moreover, use of the NM-EHEC kit targeting virulence genes (espK, espV, and CRISPR_O26E), as an additional PCR screening after EH1 PCR (stx and eae), has the potential to reduce the number of samples that require further O-type determination. However, the GENE-UP E. coli O157:H7 detection system needs to reduce rates of false negative results caused by the shift of Tm when E. coli O157:H7 and O157:non-H7 co-exist in a sample.
Item Open Access Characterization of the selective hydrolysis of branched ubiquitin chains by Uch37 and its activator Rpn13 (Colorado State University. Libraries, 2020) Hazlett, Zachary S., author; Yao, Tingting, advisor; Cohen, Robert, committee member; Peersen, Olve, committee member; Di Pietro, Santiago, committee member; Kennan, Alan, committee member
The ubiquitin (Ub) C-terminal hydrolase, Uch37, can be found associated with the 26S proteasome as well as the INO80 chromatin remodeling complex. Bound to the 26S proteasome, it assists in regulating the degradation of Ub-modified proteins. The proteasomal subunit Rpn13 binds Uch37, anchors it to the proteasome 19S regulatory particle, and enhances the deubiquitinating enzyme's (DUB's) catalytic activity. While the structure of the Uch37/Rpn13 complex bound to a single Ub molecule has been characterized, much still remains unknown regarding the enzyme's substrate specificity, the molecular basis for that specificity, and its function in the regulation of proteasomal degradation. In this thesis we characterize the substrate specificity of Uch37 with and without its proteasomal binding partner Rpn13. By synthesizing poly-Ub chains of various linkage types and topologies and using these Ub chains in in vitro deubiquitination assays, we were able to determine that Uch37/Rpn13 selectively cleaves branched Ub chains. This provides evidence to suggest that Uch37 is the first enzyme identified with activity specific for branched Ub chains. Branched Ub chains have been identified endogenously and have roles connected to the regulation of nascent misfolded polypeptides, cell cycle control, and the enhancement of proteasomal degradation.
The work presented here sets out to characterize the molecular mechanism of branched chain hydrolysis by Uch37 and its binding partner Rpn13, determine the kinetics of this enzymatic reaction, and establish a system for probing the function of "debranching" by Uch37 in proteasomal degradation. In conclusion, our work builds our understanding of the complex system of intracellular signaling by Ub and unveils key elements of the primary system responsible for regulating cellular protein homeostasis.
Item Open Access Analytical spectroscopy method development to study mechanisms of Alzheimer's and tuberculosis diseases (Colorado State University. Libraries, 2020) Beuning, Cheryle Nicole, author; Crans, Debbie C., advisor; Levinger, Nancy E., committee member; Barisas, George, committee member; Fisher, Ellen R., committee member; Zabel, Mark, committee member
This dissertation covers the analytical method development created to study and enhance the knowledge of two specific disease mechanisms important to Alzheimer's disease and Mycobacterium tuberculosis. This dissertation has two parts: Part 1 is entitled Measurement of The Kinetic Rate Constants of Interpeptidic Divalent Transition Metal Ion Exchange in Neurodegenerative Disease, and Part 2 is entitled The Electrochemistry of Truncated Menaquinone Electron Transporters with Saturated Isoprene Side Chains Important in Tuberculosis. These diseases appear among the World Health Organization's top 10 leading causes of death worldwide. The amyloid-beta (Aβ) peptides are associated with Alzheimer's disease, where neurodegeneration is caused by the aggregation of the peptide into senile plaques within neuronal synaptic cleft spaces. Alzheimer's disease currently has no cure due to its multi-causative pathology. One disease mechanism is the coordination of divalent metal ions to the peptide. Specifically, Aβ coordinates Cu(II) and Zn(II) ions that can enhance the aggregation of Aβ into plaques. These metal ions are highly regulated within the human body and are usually found bound to peptides and not as free ions. Therefore, the Aβ must sequester the metals from other proteins and peptides. The primary research in this dissertation advances fluorescence method development to measure interpeptidic Cu(II) exchange kinetics in order to probe this type of disease mechanism. The small peptides GHK (Gly – His – Lys) and DAHK (Asp – Ala – His – Lys) both chelate Cu(II) with nM affinity, have biological relevance as they are motifs found in human blood like Aβ, and chelate Cu(II) with nitrogen-rich binding ligands similar to those of Aβ. By substituting non-coordinating lysine residues with fluorescent tryptophan, the interpeptidic exchange rates can be measured, since tryptophan fluorescence is statically quenched when within 14 angstroms of a bound paramagnetic Cu(II). Thus, Cu(II) transfer from Cu(H-1GHW) to either GHK or DAHK can be monitored by recovered GHW fluorescence as the Cu(II) is exchanged, and second-order kinetic rate constants were determined. This methodology was then used to monitor the Cu(II) exchange from truncated Cu(Aβ1-16) and Cu(Aβ1-28) complexes to GHW and DAHW, where second-order reaction kinetic rate constants were determined. While the second-order rate constants for the exchanges of Cu(H-1GHW) with GHK and DAHK were on the order of 10² and 10¹ M⁻¹s⁻¹, respectively, the exchanges from Cu(Aβ) complexes were 2-3 orders of magnitude larger, about 10⁴ M⁻¹s⁻¹ (to both GHW and DAHW).
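As an editorial illustration of how a second-order exchange rate constant of this magnitude could be extracted from a fluorescence-recovery trace (a generic pseudo-first-order treatment, not the authors' actual fitting procedure), the sketch below fits a single-exponential recovery with scipy; the rate constant, acceptor concentration, and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented example: acceptor peptide in large excess, so the exchange looks
# pseudo-first-order with k_obs = k2 * [acceptor].
k2_true = 1.0e4            # M^-1 s^-1, assumed "true" second-order rate constant
acceptor = 5.0e-6          # M, excess acceptor peptide concentration
k_obs_true = k2_true * acceptor

t = np.linspace(0.0, 100.0, 200)                   # s
rng = np.random.default_rng(2)
signal = 1.0 - np.exp(-k_obs_true * t)             # normalized recovered fluorescence
signal += rng.normal(scale=0.02, size=t.size)      # measurement noise

def recovery(t, k_obs):
    return 1.0 - np.exp(-k_obs * t)

(k_obs_fit,), _ = curve_fit(recovery, t, signal, p0=[0.01])
k2_fit = k_obs_fit / acceptor                      # back out the second-order constant
print(f"fitted k2 ~ {k2_fit:.2e} M^-1 s^-1")
```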
These differences in rate constant magnitude arise from the fact that the affinity of GHW for Cu(II) (KA = 10¹³ M⁻¹) is larger than that of Aβ (KA = 10¹⁰ M⁻¹). This method development is an important step toward accurate measurement of the interpeptidic exchange between Aβ peptides, including in their fibril and plaque formations. Since senile plaques are found in synaptic cleft spaces with nanometer distances between neurons, a model system was generated to study coordination reactions at the nanoscale. In order to do this, the metal ion would need to be released in a controlled manner. Studies of metal ion burst reactions through the use of photocages can simulate bursts of ions like those seen in the synaptic cleft. Zn(II) is often released in its ionic form within the synapse in its function as a neurotransmitter, so we simulated a burst of Zn(II) ions by using a photocage, [Zn(NTAdeCage)]⁻, which releases Zn(II) when irradiated with light. The photocage was irradiated to release Zn(II), and we then followed its reaction progress with an in situ chelator, Zincon, in reverse micelles and in bulk aqueous buffer. The coordination reaction was 1.4 times faster in aqueous buffer than in reverse micelles, despite the Zn(II) and Zincon being closer in the nanoparticle. These observations suggested that coordination reactivity is affected within a highly heterogeneous environment with a cell-like membrane, owing to the partitioning of each ligand. We observe that the photocage stays in the water pool of the reverse micelle and the Zincon partitions into the membrane interface. Thus, the coordination reactivity is diminished, likely because the Zn(II) must diffuse across a highly organized Stern layer to encounter the Zincon. In aqueous buffer, by contrast, the two are free to encounter each other despite being hundreds of nanometers apart. These proof-of-concept studies are integral to studying initial binding dynamics of metal ions with peptides at the nanoscale present in cells and neuronal synapses. Tuberculosis is caused by a pathogenic bacterium that, despite the availability of curative medication, can be drug-resistant. Menaquinone (MK) analogs with regiospecific partial saturation in their isoprenyl side chain, such as MK-9(II-H2), are found in many types of bacteria, including pathogenic Mycobacterium tuberculosis, and function as electron transport lipids cycling between quinone and quinol forms within the electron transport system. While the function of MK is well established, the role of regiospecific partial saturation of the MK isoprenyl side chain remains unclear and may be related to its redox function. Recently, an enzyme in M. tuberculosis called MenJ was shown to selectively saturate the second isoprene unit of MK-9 to MK-9(II-H2). Expression of this enzyme was shown, via knockout studies, to be essential to the survival of the bacterium. A series of synthesized truncated MK-n analogs were investigated using a systematic statistical approach to test the effects of regiospecific saturation on the redox potentials. Using principal component analysis on the experimental redox potentials, the effects of saturation of the isoprene tail on the redox potentials were identified. Partial saturation of the second isoprene unit resulted in more positive redox potentials, requiring less energy to reduce the quinone, while full saturation of the isoprene tail resulted in the most negative potentials measured, requiring more energy to reduce the quinone.
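As a hedged illustration of the principal-component-analysis step described just above (not the dissertation's actual dataset or workflow), the short sketch below runs scikit-learn's PCA on an invented matrix of reduction potentials for hypothetical MK analogs; the labels and values are placeholders chosen only to echo the reported trend.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented example matrix: rows are hypothetical MK-n analogs, columns are
# measured reduction potentials (V) under two conditions. All values are placeholders.
labels = ["MK-1", "MK-2", "MK-2(II-H2)", "MK-3", "MK-3(II-H2)", "MK-3(full-H)"]
potentials = np.array([
    [-0.62, -1.10],
    [-0.60, -1.08],
    [-0.55, -1.02],   # partial (II) saturation: less negative, easier to reduce
    [-0.59, -1.07],
    [-0.54, -1.01],
    [-0.66, -1.15],   # full saturation: most negative, hardest to reduce
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(potentials))
for name, (pc1, pc2) in zip(labels, scores):
    print(f"{name:>14s}  PC1={pc1:+.2f}  PC2={pc2:+.2f}")
```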
These observations give insight into why these partially saturated menaquinones are conserved in nature.
Item Open Access The dynamic nature of ligand layers on gold nanoclusters (Colorado State University. Libraries, 2020) Hosier, Christopher Allen, author; Ackerson, Christopher J., advisor; Kennan, Alan J., committee member; Henry, Chuck, committee member; Kipper, Matthew, committee member
Gold nanoclusters have been heavily investigated over the last few decades for their potential use in sensing, imaging, energy conversion, and catalytic applications. The development of methodology that allows for controlled functionalization of the surface ligand layer in these compounds is of particular interest due to the role of ligands in determining a large number of cluster properties. One of the fundamental ways of tailoring the ligand layer is the use of ligand exchange reactions. Despite the synthetic utility that ligand exchange reactions afford, a significant number of unanswered challenges currently limits the scope and control that can be obtained with these reactions. While a large variety of ligand types have been used to protect nanocluster surfaces, the majority of reported ligand exchange reactions revolve around chalcogenate-for-chalcogenate exchange. Site-selectivity in these reactions is limited to kinetic phenomena, and the role of intercluster exchange largely remains a mystery. Additionally, recent works suggest that changes in ligand orientation can impact bulk material properties. In this thesis, we seek to address these challenges by reporting new exchange methodology, probing the evolution of exchanged ligand layers over time, investigating the stability of ligand layers in reaction conditions, and exploring the impact of ligand orientation on nanocluster behavior and reactivity. By addressing these questions and challenges, we seek to move closer to the goal of developing methodology that can be easily and reliably used to tailor gold nanoclusters for directed applications.
Item Open Access Examining barriers that predict mindfulness uptake in parents of children with autism spectrum disorder (Colorado State University. Libraries, 2020) Castells, Kara, author; Hepburn, Susan, advisor; Coatsworth, Doug, committee member; Brown, Samantha, committee member
This study aimed to investigate barriers to mindfulness practice in parents of children with Autism Spectrum Disorder (ASD). I hypothesized that I could reliably measure three barriers to mindfulness by having parents rate themselves on statements reflecting these barriers. I also hypothesized that the barriers to mindfulness vary as a function of parent characteristics (e.g., overall experience with mindfulness, trait mindfulness, level of mindfulness experience) and child characteristics (e.g., severity of ASD symptoms) and that parents in this population are less likely to use mindfulness to reduce parent stress due to the perceived barriers: (1) misconceptions about mindfulness, (2) beliefs that parenting stress is not relevant to child outcomes, and (3) lack of time parents allocate to focus on their own well-being. The study surveyed 91 parents of children with ASD using a demographics questionnaire, the Mindfulness Barriers Scale (MBS), created by the research team, and the Mindful Attention and Awareness Scale. Preliminary analysis of the measure was conducted, followed by a series of independent-samples t-tests, an ANOVA, and regression analysis to test the hypotheses.
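For readers unfamiliar with the tests named just above, a minimal, hedged sketch of an independent-samples t-test and a one-way ANOVA in Python follows; the group labels, group sizes, and scores are invented and are not the study's data.

```python
import numpy as np
from scipy import stats

# Invented barrier-score samples for three hypothetical groups (n = 91 total).
rng = np.random.default_rng(3)
low_experience = rng.normal(3.4, 0.6, 30)
high_experience = rng.normal(2.9, 0.6, 30)
no_experience = rng.normal(3.8, 0.6, 31)

t_stat, t_p = stats.ttest_ind(low_experience, high_experience)                 # independent-samples t-test
f_stat, f_p = stats.f_oneway(no_experience, low_experience, high_experience)   # one-way ANOVA
print(f"t(58)={t_stat:.2f}, p={t_p:.3f};  F(2,88)={f_stat:.2f}, p={f_p:.3f}")
```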
Examination of the MBS showed that each subscale was distinct in what it measured and showed acceptable reliability. Results showed that misconceptions, time, and disinterest in mindfulness (a single-item variable found to be conceptually interesting in the preliminary analysis) were predictors of mindfulness uptake. Significant differences were found between the levels of mindfulness experience and misconceptions about mindfulness; parents with neutral or negative overall experience with mindfulness reported time as a greater barrier and higher misconceptions than parents with positive overall experience; and parents with low trait mindfulness reported time as a greater barrier than parents with high trait mindfulness. The significance of the findings, limitations, and future directions are discussed.
Item Open Access Bio-inspired design for engineering applications: empirical and finite element studies of biomechanically adapted porous bone architectures (Colorado State University. Libraries, 2020) Aguirre, Trevor Gabriel, author; Donahue, Seth W., advisor; Ma, Kaka, committee member; Heyliger, Paul, committee member; Simske, Steven, committee member
Trabecular bone is a porous, lightweight material structure found in the bones of mammals, birds, and reptiles. Trabecular bone continually remodels itself to maintain light weight and mechanical competence, and to repair accumulated damage. The remodeling process can adjust trabecular bone architecture to meet the changing mechanical demands of a bone due to changes in physical activity such as running, walking, etc. It has previously been suggested that bone adapted to extreme mechanical environments, with unique trabecular architectures, could have implications for various bioinspired engineering applications. The present study investigated porous bone architecture for two examples of extreme mechanical loading. Dinosaurs were exceptionally large animals whose body mass placed massive gravitational loads on their skeletons. Previous studies investigated dinosaurian bone strength and biomechanics, but the relationships between dinosaurian trabecular bone architecture and mechanical behavior have not been studied. In this study, trabecular bone samples from the distal femur and proximal tibia of dinosaurs ranging in body mass from 23-8,000 kg were investigated. The trabecular architecture was quantified from micro-computed tomography scans, and allometric scaling relationships were used to determine how the trabecular bone architectural indices changed with body mass. Trabecular bone mechanical behavior was investigated by finite element modeling. It was found that dinosaurian trabecular bone volume fraction is positively correlated with body mass, as is observed for extant mammalian species, while trabecular spacing, number, and connectivity density in dinosaurs are negatively correlated with body mass, exhibiting the opposite behavior from extant mammals. Furthermore, it was found that trabecular bone apparent modulus is positively correlated with body mass in dinosaurian species, while no correlation was observed for mammalian species. Additionally, trabecular bone tensile and compressive principal strains were not correlated with body mass in mammalian or dinosaurian species.
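To illustrate the allometric scaling analysis referred to above in generic form (an editorial sketch, not the dissertation's data or code), a power law y = a * M^b can be fit by linear regression in log-log space; the body masses and bone-volume-fraction values below are invented.

```python
import numpy as np
from scipy import stats

# Generic allometric fit y = a * M^b via log-log linear regression.
body_mass = np.array([23.0, 150.0, 800.0, 2500.0, 8000.0])          # kg (placeholders)
bone_volume_fraction = np.array([0.22, 0.26, 0.31, 0.35, 0.41])     # BV/TV (placeholders)

fit = stats.linregress(np.log10(body_mass), np.log10(bone_volume_fraction))
a, b = 10.0 ** fit.intercept, fit.slope
print(f"BV/TV ~ {a:.3f} * M^{b:.3f}   (r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f})")
```

The fitted exponent b is the scaling exponent of interest; a positive b corresponds to an index that increases with body mass, a negative b to one that decreases.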
Trabecular bone apparent modulus was positively correlated with trabecular spacing in mammals and positively correlated with connectivity density in dinosaurs, but these differential architectural effects on trabecular bone apparent modulus limit average trabecular bone tissue strains to below 3,000 microstrain for estimated high levels of physiological loading in both mammals and dinosaurs. Rocky Mountain bighorn sheep rams (Ovis canadensis canadensis) routinely engage in intraspecific combat in which high-energy cranial impacts are experienced. Previous studies have estimated cranial impact forces up to 3,400 N, and yet the rams show no observable long-term damage. Prior finite element studies of bighorn sheep ramming have shown that the horn reduces brain cavity translational accelerations and the bony horncore stores 3× more strain energy than the horn during impact. These previous findings have yet to be applied where impact force reduction is needed, such as helmets and athletic footwear. In this study, the velar architecture was mimicked and tested to determine its suitability as a novel material architecture for running shoe midsoles. It was found that velar bone mimics reduce impact force (p < 0.001) and provide higher energy storage during impact (p < 0.001) and compression (p < 0.001) compared to traditional midsole architectures. Furthermore, a quadratic relationship (p < 0.001) was discovered between impact force and stiffness in the velar bone mimics. These findings have implications for the design of novel material architectures with optimal stiffness for minimizing impact force.
Item Open Access Assessing usability of full-body immersion in an interactive virtual reality environment (Colorado State University. Libraries, 2020) Raikwar, Aditya R., author; Ortega, Francisco R., advisor; Beveridge, Ross, committee member; Stephens, Jaclyn, committee member; Smith, Charles, committee member
Improving immersion and playability has a direct impact on the effectiveness of certain Virtual Reality applications. This project looks at understanding how to develop an immersive soccer application with the intention of measuring skills, particularly for assessment and health promotion. This project shows the requirements to create a top-down immersive experience with commodity devices. The system simulates a soccer training environment in which players evade opponents, pass to teammates, and score goals, with the objective of measuring the difficulty of single, double, and triple tasks. It is expected that performance will decline as the number of concurrent tasks increases. This hypothesis is highly relevant, as it provides a system that could serve as an assessment tool for people with concussions returning to play (with physician approval) or to promote exercise among non-athletes. This thesis provides all the necessary steps to explain the high-level details of highly immersive applications while providing a future path for human-subject experiments.
Item Open Access Quantifying internal climate variability and its changes using large-ensembles of climate change simulations (Colorado State University. Libraries, 2020) Li, Jingyuan, author; Thompson, David W. J., advisor; Barnes, Elizabeth A., committee member; Ravishankara, A. R., committee member; Cooley, Daniel, committee member
Increasing temperatures over the last 50 years have led to a multitude of studies on observed and future impacts on surface climate.
However, any change in the mean needs to be placed in the context of its variability to be understood and quantified. This allows us to: 1) understand the relative impact of the mean change on the subsequent environment, and 2) detect and attribute the external change from the underlying "noise" of internal variability. One way to quantify internal variability is through the use of large ensemble models. Each ensemble member is run on the same model and with the same external forcings, but with slight differences in the initial conditions. Differences between ensemble members are due solely to internal variability. This research exploits one such large ensemble of climate change simulations (CESM-LE) to better understand and evaluate surface temperature variability and its effects under external forcing. One large contributor to monthly and annual surface temperature variability is the atmospheric circulation, especially in the extratropics. Dynamical adjustment seeks to determine and remove the effects of circulation on temperature variability in order to narrow the range of uncertainty in the temperature response. The first part of this work compares several commonly used dynamical adjustment methods in both a pre-industrial control run and the CESM-LE. Because there are no external forcings in the control run, it is used to provide a quantitative metric by which the methods are evaluated. We compare and assess these dynamical adjustment methods on the basis of two attributes: 1) the method should remove a maximum amount of internal variability while 2) preserving the true forced signal. While the control run is excellent for assessing the methods in an "ideal" environment, results from the CESM-LE show biases in the dynamically-adjusted trends due to a forced response in the circulation fields themselves. This work provides a template from which to assess the various dynamical adjustment methods available to the community. A less studied question is how internal variability itself will respond to climate change. Past studies have found regional changes in surface temperature variance and skewness. This research also investigates the impacts of climate change on day-to-day persistence of surface temperature. Results from the CESM-LE suggest that external warming generally increases surface temperature persistence, with the largest changes over the Arctic and ocean regions. The results are robust and distinct from internal variability. We suggest that persistence changes are mostly due to an increase in the optical thickness of the atmosphere driven by increases in both carbon dioxide and water vapor. This increased optical thickness reduces the thermal damping of surface temperatures, increasing their persistence. Model results from idealized aquaplanet simulations with different radiation schemes support this hypothesis. The results thus reflect a robust thermodynamic and radiative constraint on surface temperature variability.
Item Open Access Implicit solvation using the superposition approximation applied to many-atom solvents with static geometry and electrostatic dipole (Colorado State University. Libraries, 2020) Mattson, Max Atticus, author; Krummel, Amber T., advisor; McCullagh, Martin, advisor; Szamel, Grzegorz, committee member; Prieto, Amy, committee member; Krueger, David, committee member
Large-scale molecular aggregation of organic molecules, such as perylene diimides, is a phenomenon that continues to generate interest in the field of solar light-harvesting.
Functionalization of the molecules can lead to different aggregate structures, which in turn alter the spectroscopic properties of the molecules. To improve the next generation of perylene diimide solar cells, a detailed understanding of their aggregation is necessary. A critical aid in understanding the spectroscopic properties of large-scale aggregating systems is molecular simulation. Thus, development of an efficient and accurate method for simulating large-scale aggregating systems at dilute concentrations is imperative. The Implicit Solvation Using the Superposition Approximation model (IS-SPA) was originally developed to efficiently model nonpolar solvent–solute interactions for chargeless solutes in TIP3P water, improving the efficiency of dilute molecular simulations by two orders of magnitude. In the work presented here, IS-SPA is developed for charged solutes in chloroform solvent. Chloroform is the first solvent model developed for IS-SPA that is composed of more than one Lennard-Jones potential. Solvent distribution and force histograms were measured from all-atom explicit-solvent molecular dynamics simulations, instead of using analytic functions, and tested for Lennard-Jones sphere solutes of various sizes. The level of detail employed in describing the 3-dimensional structure of chloroform is tested by approximating chloroform as an ellipsoid, spheroid, and sphere by using 3-, 2-, and 1-dimensional distribution and force histograms, respectively. A perylene diimide derivative, lumogen orange, was studied for its unfamiliar aggregation mechanism in chloroform and tetrahydrofuran solvents via Fourier-transform infrared and 2-dimensional infrared spectroscopies as well as all-atom explicit-solvent molecular dynamics simulations and quantum mechanical frequency calculations. Molecular simulations identified two categories of likely aggregate dimer structures: the expected π-stack structure, and a less familiar edge-sharing structure where the most highly charged atoms of the perylene diimide core are strongly interacting. Quantum mechanical vibrational frequency calculations were performed for various likely dimer aggregate structures identified in molecular simulation and compared to experimental spectroscopic results. The experimental spectra of the aggregating system share qualities with the edge-sharing dimer frequency calculations; however, larger aggregate structures should be tested. A violanthrone derivative, violanthrone-79 (V-79), was studied for its differing aggregation mechanisms in chloroform and tetrahydrofuran solvents via Fourier-transform infrared and 2-dimensional infrared spectroscopies as well as all-atom explicit-solvent molecular dynamics simulations and quantum mechanical frequency calculations. The π-stacking aggregate structure of V-79 is supported by all methods used; however, the π-stacking orientations differ between the two solvents. Chloroform supports parallel π-stacked aggregates while tetrahydrofuran supports anti-parallel π-stacked aggregates, which show differing vibrational energy delocalization between the aggregated molecules. The publications in chapters 3 and 4 demonstrate the power of combining experimental spectroscopy and computational methods like molecular dynamics simulations and quantum mechanical frequency calculations; however, they also show that larger simulations with multiple solute molecules are needed. This is why developing IS-SPA for these simulations is necessary.
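To give a rough, hedged sense of the superposition idea named in the IS-SPA model above (a toy numerical sketch, not the dissertation's implementation), the snippet below approximates the solvent density around a two-atom solute as the product of single-atom distribution functions and integrates a per-atom mean-force curve against it; the Gaussian-well g(r) and f(r), the geometry, and the units are all invented placeholders for the measured histograms, and the bulk-density prefactor is omitted.

```python
import numpy as np

def g(r):
    """Placeholder pair-distribution function with a hard core and a first solvation shell."""
    return np.where(r < 0.3, 0.0, 1.0 - 0.8 * np.exp(-((r - 0.4) / 0.1) ** 2))

def f(r):
    """Placeholder radial mean force (attractive well), arbitrary units."""
    return -0.5 * np.exp(-((r - 0.45) / 0.1) ** 2)

atom1 = np.array([0.0, 0.0, 0.0])   # nm
atom2 = np.array([0.5, 0.0, 0.0])   # nm

# Coarse Cartesian grid around the solute (for illustration only).
ax = np.linspace(-1.5, 2.0, 60)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)
dV = (ax[1] - ax[0]) ** 3

r1 = np.linalg.norm(grid - atom1, axis=-1)
r2 = np.linalg.norm(grid - atom2, axis=-1)
density = g(r1) * g(r2)              # superposition approximation of the solvent density

rhat1 = (grid - atom1) / np.clip(r1, 1e-9, None)[..., None]
force_on_atom1 = np.sum((density * f(r1))[..., None] * rhat1 * dV, axis=(0, 1, 2))
print(force_on_atom1)                # approximate mean solvation force vector on atom 1
```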
Further developments to IS-SPA are discussed regarding the importance of various symmetries of chloroform and the subsequent dimensionalities of the histograms used to describe its distribution and Lennard-Jones force. Two methods for describing the Coulombic forces of chloroform solvation are discussed and tested on oppositely charged Lennard-Jones sphere solutes. The radially symmetric treatment fails to capture the Coulombic forces of the spherical solute system from all-atom explicit-solvent molecular dynamics simulations. A dipole polarization treatment is presented and tested for the charged spherical solute system, which better captures the Coulombic forces measured from all-atom explicit-solvent molecular dynamics simulations. Additional considerations for the improvement of IS-SPA and the developments in this work are presented. The dipole polarization approximation outlined in chapter 5 assumes that each chloroform is a static dipole; allowing the dipole magnitude to fluctuate as well as polarize is a more physically rigorous approximation that would likely improve the accuracy of Coulombic forces in IS-SPA. A novel method, drawn from the knowledge gained studying chloroform, for the efficient modeling of new solvent types in IS-SPA, including flexible solvent molecules, is discussed.
Item Open Access Spatiotemporal variations in liquid water content in a seasonal snowpack: implications for radar remote sensing (Colorado State University. Libraries, 2020) Bonnell, Randall Ray, author; McGrath, Daniel, advisor; Fassnacht, Steven, committee member; Rasmussen, Kristen, committee member
Mountain snowpacks act as seasonal reservoirs, providing a critical water resource to ~1.2 billion people globally. Regions with persistent snowpacks (e.g., mountain and polar environments) are responding quickly to climate change and are warming at faster rates than low-elevation temperate and equatorial regions. Since 1915, snow water equivalent (SWE) in the western U.S. snowpack has declined by 21%, and snow-covered area is contracting in the Rocky Mountains. Despite the clear importance of this resource and the identification of changes affecting it, no current remote sensing approach can accurately measure SWE at high spatiotemporal resolution. L-band (1-2 GHz) Interferometric Synthetic Aperture Radar (InSAR) is a promising approach for detecting changes in SWE at high spatiotemporal resolution in complex topography, but there are uncertainties regarding its performance, particularly when liquid water content (LWC) is present in the snowpack. LWC exhibits high spatial variability, causing spatially varying radar velocity that introduces significant uncertainty in SWE-retrievals. The objectives of this thesis are to: (1) examine the importance of slope, aspect, canopy cover, and air temperature in the development of LWC in a continental seasonal snowpack using 1 GHz ground-penetrating radar (GPR), a proxy for L-band InSAR, and (2) quantify the uncertainty in L-band radar SWE-retrievals in wet snow. This research was performed at Cameron Pass, a high elevation pass (3120 m) located in north-central Colorado, over the course of multiple survey dates during the melt season of 2019. Transects were chosen to represent a range of slope, aspect, and canopy cover. Slope and aspect were simplified using the northness index (NI). Canopy cover was quantified using the leaf area index (LAI). Positive degree days (PDD) was used to represent available melt energy from air temperature.
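For readers unfamiliar with the two predictors just defined, a small hedged sketch follows; the northness formula shown (cos(aspect) × sin(slope)) is one common formulation and may differ in detail from the thesis, and the temperatures are invented.

```python
import numpy as np

def northness_index(slope_deg, aspect_deg):
    """One common northness formulation: near +1 for steep north-facing slopes,
    near -1 for steep south-facing slopes (aspect measured clockwise from north)."""
    return np.cos(np.radians(aspect_deg)) * np.sin(np.radians(slope_deg))

def positive_degree_days(daily_mean_temps_c):
    """Sum of daily mean air temperatures above 0 degrees C (a melt-energy proxy)."""
    return float(np.sum(np.clip(np.asarray(daily_mean_temps_c), 0.0, None)))

print(northness_index(slope_deg=25.0, aspect_deg=10.0))    # near-north aspect
print(northness_index(slope_deg=25.0, aspect_deg=190.0))   # near-south aspect
print(positive_degree_days([-3.0, 1.5, 4.0, 0.5, -1.0]))   # -> 6.0
```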
The spatiotemporal development of LWC was studied along the transects using GPR, probed depths, and snowpit-measured density. A subset of this project substituted Terrestrial LiDAR Scans (TLS) for probed depths. Surveys (17 in total, up to 3 per date) were performed on seven dates, beginning on 5 April 2019, when LWC values were ~0 vol. %, and ending on 19 June 2019, when LWC values exceeded 10 vol. %. Point measurements of LWC were observed to change (ΔLWC) by +9 vol. % or -8 vol. % over the course of a single day, but median ΔLWC was ~0 vol. % or slightly negative. LAI was negatively correlated with LWC for 13 out of the 17 surveys. NI was negatively correlated with LWC for 10 out of the 17 surveys. Multi-variable linear regressions to estimate ΔLWC identified several statistically significant variables (p-value < 0.10): LAI, NI, ΔPDD, and NI × ΔPDD. Snow-on TLS were conducted twice during the melt season, and a snow-off scan was conducted in late summer. Snow-on scans were differenced from the snow-off scan to produce distributed snow depth maps. TLS-derived snow depths compared poorly with probe-derived depths, which is attributed to poor LiDAR penetration through the thick vegetation present during the snow-off scan. Finally, radar measurements of SWE (SWE-retrievals), if coupled with velocities derived from dry-snow densities, overestimated the mean SWE along transects by as much as 40% during the melt season, highlighting a potential issue for water managers. Future work to support the testing of L-band radar SWE-retrievals in wet snow should test radar signal-power attenuation methods and the capabilities of snow models for estimating LWC.
Item Open Access Superhydrophobic titania nanoflowers for reducing adhesion of platelets and bacteria (Colorado State University. Libraries, 2020) Montgomerie, Zachary Z., author; Popat, Ketul C., advisor; Li, Vivian, committee member; Sampath, Walajabad S., committee member
Thrombosis and bacterial infection are key challenges for blood-contacting medical devices. When blood components encounter a device's surface, proteins are adsorbed, followed by the adhesion and activation of platelets as well as an immune response. This culminates in clot formation via the trapping of red blood cells in a fibrin matrix, which can block the device's function and cause severe complications for the patient. Bacteria may also adhere to a device's surface. This can lead to the formation of a biofilm, a protective layer for bacteria that significantly increases resistance to antibiotics. Despite years of research, no long-term solutions have been discovered to combat these issues. To impede thrombosis, patients often take antiplatelet drugs for the life of their device, which can cause excess bleeding and other complications. Patients can take antibiotics to fight bacterial infection, but these are often ineffective if biofilms are formed. Superhydrophobic surfaces have recently been studied for their antiadhesive properties and show promise in reducing both thrombosis and bacterial infection. In this work, superhydrophobic titania nanoflower surfaces were successfully fabricated on a titanium alloy (Ti-6Al-4V) substrate and examined for both hemocompatibility and bacterial adhesion.
The results indicated a reduction in protein adsorption, platelet and leukocyte adhesion and activation, whole blood clotting, bacterial adhesion, and biofilm formation compared to control surfaces, as well as surface stability.
Item Open Access A qualitative analysis of the experience of being LGBTQ in graduate school (Colorado State University. Libraries, 2020) Sokolowski, Elizabeth, author; Chavez, Ernest, advisor; Rickard, Kathryn, committee member; Carlson, Laurie, committee member; Davalos, Deana, committee member
The current study sought to understand LGBTQ campus climate for LGBTQ doctoral students. Narrative analysis was used during this exploratory study to identify "when" the three LGBTQ doctoral student participants had experiences related to their LGBTQ identities, including "what" was happening during those events and "how" it was happening. These experiences occurred during six events (i.e., applying to graduate programs, receiving a letter of acceptance from a graduate program, a visiting weekend after receiving an acceptance letter, choosing an advisor or research lab, working as a graduate teaching assistant, and preparing for PhD candidacy exams) and four time periods (i.e., early general experiences in the graduate program, general graduate school experiences, general research lab experiences, and general social experiences during graduate school). This study also identified how these experiences supported or hindered LGBTQ doctoral student success. Overall, the results suggested that LGBTQ doctoral students expended substantial effort to manage the harmful components of campus climate, which were present across locations, times, and roles as a doctoral student. Finally, participants shared their own proposed changes to improve campus climate, and the primary researcher provided an overarching list of recommendations to improve LGBTQ campus climate for LGBTQ doctoral students.
Item Open Access User-oriented mobility management in cellular wireless networks (Colorado State University. Libraries, 2020) Alsaeedy, Alaa A. R., author; Chong, Edwin, advisor; Morton, Jade, committee member; Luo, J. Rockey, committee member; Atadero, Rebecca, committee member
Mobility Management (MM) in wireless mobile networks is a vital process to keep an individual User Equipment (UE) connected while moving within the network coverage area—this is required to keep the network informed about the UE's mobility (i.e., location changes). The network must identify the exact serving cell of a specific UE for the purpose of data-packet delivery. The two MM procedures that are necessary to localize a specific UE and deliver data packets to that UE are known as Tracking Area Update (TAU) and Paging, which are burdensome not only to the network resources but also to the UE's battery—the UE and the network always initiate the TAU and Paging, respectively. These two procedures are used in current Long Term Evolution (LTE) and its next generation (5G) networks despite the drawback that they consume bandwidth and energy. Because of potentially very high-volume traffic and the increasing density of high-mobility UEs, the TAU/Paging procedure incurs significant costs in terms of signaling overhead and power consumption in battery-limited UEs. This problem will become even worse in 5G, which is expected to accommodate exceptional services, such as supporting mission-critical systems (close-to-zero latency) and extending battery lifetime (10 times longer).
This dissertation examines and discusses a variety of solution schemes for both the TAU and Paging, emphasizing a new key design to accommodate 5G use cases. However, ongoing efforts are still developing new schemes to provide seamless connections to the ever-increasing density of high-mobility UEs. In this context and toward achieving 5G use cases, we propose a novel solution to solve the MM issues, named gNB-based UE Mobility Tracking (gNB-based UeMT). This solution has four features aligned with achieving 5G goals. First, mobile UEs no longer trigger the TAU to report their location changes, yielding much greater power savings with no signaling overhead. Second, the network elements (gNBs) instead take over the responsibility of tracking and locating these UEs, giving always-known UE locations. Third, our Paging procedure is markedly improved over the conventional one, providing very fast UE reachability with no Paging messages being sent simultaneously. Fourth, our solution guarantees lightweight signaling overhead with very low Paging delay; our simulation studies show that it achieves about a 92% reduction in the corresponding signaling overhead. To realize these four features, this solution adds no implementation complexity. Instead, it exploits the already existing LTE/5G communication protocols, functions, and measurement reports. Our gNB-based UeMT solution by design has the potential to deal with mission-critical applications. In this context, we introduce a new approach for mission-critical and public-safety communications. Our approach aims at emergency situations (e.g., natural disasters) in which the mobile wireless network becomes dysfunctional, partially or completely. Specifically, this approach is intended to provide swift network recovery for Search-and-Rescue Operations (SAROs) to search for survivors after large-scale disasters, which we call UE-based SAROs. These SAROs are based on the fact that increasingly almost everyone carries wireless mobile devices (UEs), which serve as human-based wireless sensors on the ground. Our UE-based SAROs are aimed at accounting for limited UE battery power while providing critical information to first responders, as follows: 1) generate immediate crisis maps for the disaster-impacted areas, 2) provide vital information about where the majority of survivors are clustered/crowded, and 3) prioritize the impacted areas to identify regions that urgently need communication coverage. UE-based SAROs offer first responders a vital tool to prioritize and manage SAROs efficiently and effectively in a timely manner.
Item Open Access Examining listening skills of diplomatic French as foreign language learners: an angle for languages for specific purposes (Colorado State University. Libraries, 2020) Zecher, Eryth, author; Grim, Frédérique, advisor; Nekrasova-Beker, Tatiana, advisor; Becker, Anthony, committee member; Brazile, William, committee member; Vogl, Mary, committee member
Listening comprehension and vocabulary knowledge are closely intertwined. Vocabulary knowledge (size) has been found to be a strong predictor of successful listening comprehension even when listening is done under adverse conditions. Previous research has focused on advanced-proficiency or native-level listeners. This study aims to fill a research gap by studying the improvements in listening comprehension in speech-shaped noise for ten intermediate-level French as a foreign language learners enrolled in French courses at an American university.
This study focuses on whether 4 hours of instruction on diplomatic French vocabulary terms, delivered with background speech-shaped noise presented at a +5 dB signal-to-noise ratio, would increase the comprehensibility of unfamiliar accented speech from nine different speakers for intermediate-level learners of French as a foreign language. The results show that intermediate-level listeners improved their listening comprehension skills, and that vocabulary training was the most important factor. Findings also show that intermediate-level listeners can adapt to unfamiliar accented speech, and that the listeners can be taught advanced-level vocabulary when it is presented as language for specific purposes and under adverse listening conditions.
Item Open Access A two-field finite element solver for linear poroelasticity (Colorado State University. Libraries, 2020) Wang, Zhuoran, author; Liu, Jiangguo, advisor; Tavener, Simon, advisor; Zhou, Yongcheng, committee member; Ma, Kaka, committee member
Poroelasticity models the interaction between an elastic porous medium and the fluid flowing in it. It has wide applications in biomechanics, geophysics, and soil mechanics. Because analytical solutions of the poroelasticity equation system are difficult to derive, finite element methods are powerful tools for obtaining numerical solutions. In this dissertation, we develop a two-field finite element solver for poroelasticity. The Darcy flow is discretized by a lowest-order weak Galerkin (WG) finite element method for fluid pressure. The linear elasticity is discretized by enriched Lagrangian (EQ1) elements for solid displacement. First-order backward Euler time discretization is implemented to solve the coupled time-dependent system on quadrilateral meshes. This poroelasticity solver has some attractive features. No stabilization is added to the system, and it is free of Poisson locking and pressure oscillations. Poroelasticity locking is avoided through an appropriate coupling of finite element spaces for the displacement and pressure. In the equation governing the flow in pores, the dilation is calculated by taking the average over the element so that the dilation and the pressure are both approximated by constants. A rigorous error estimate is presented to show that our method has optimal convergence rates for the displacement and the fluid flow. Numerical experiments are presented to illustrate the theoretical results. The implementation of this poroelasticity solver in deal.II couples the Darcy solver and the linear elasticity solver. We present the implementation of the Darcy solver and review the linear elasticity solver. Possible directions for future work are discussed.
Item Open Access Internet of things monitoring of the oxidation reduction potential in an oleophilic bio-barrier (Colorado State University. Libraries, 2020) Hogan, Wesley W., author; Scalia, Joseph, advisor; Sale, Thomas, advisor; Ham, Jay, committee member
Petroleum hydrocarbons discharged to surface water at a groundwater-surface water interface (GSI), resulting in violations of the Clean Water Act, often spark costly cleanup efforts. The oleophilic bio-barrier (OBB) has been shown to be effective in catching and retaining oils via an oleophilic (oil-loving) geocomposite and facilitating biodegradation through cyclic delivery of oxygen and nutrients via tidally driven water level fluctuations.
Item Open Access Internet of things monitoring of the oxidation reduction potential in an oleophilic bio-barrier (Colorado State University. Libraries, 2020) Hogan, Wesley W., author; Scalia, Joseph, advisor; Sale, Thomas, advisor; Ham, Jay, committee member
Petroleum hydrocarbons discharged to surface water at a groundwater-surface water interface (GSI), resulting in violations of the Clean Water Act, often spark costly cleanup efforts. The oleophilic bio-barrier (OBB) has been shown to be effective in catching and retaining oils via an oleophilic (oil-loving) geocomposite and in facilitating biodegradation through the cyclic delivery of oxygen and nutrients via tidally driven water level fluctuations. Conventional resistive (e.g., geomembrane) or absorptive-only (e.g., organoclay) barriers for oil at GSIs limit oxygen diffusion into underlying sediments and are susceptible to overloading and bypass. Conversely, OBBs are designed to function as sustainable oil-degrading bioreactors. For an OBB to be effective, the barrier must maintain the aerobic conditions created by tidally driven oxygen delivery. Oxidation reduction potential (ORP) sensors were installed within an OBB in the northeastern US with an internet of things (IoT) monitoring system, either to confirm sustained oxidizing conditions within the OBB or to detect a problem within the OBB and trigger additional remedial action. Real-time ORP data revealed consistently aerobic oxidation-reduction (redox) conditions within the OBB, with periods of slightly less oxidized redox conditions in response to precipitation. By interpreting ORP data in real time, we were able to verify that the OBB maintained the oxidizing conditions critical to the barrier functioning as an effective aerobic bioreactor to degrade potentially sheen-generating oils at GSIs. In addition, alternative oleophilic materials were tested to increase the range of candidate materials that may function as the oleophilic component of an OBB. Materials tested included thin black (232 g/m²), thin white (244 g/m²), medium black (380 g/m²), and thick black (1055 g/m²) geotextiles, as well as a coconut fiber coir mat. Finally, a model was developed to estimate the required sorptive capacity of the oleophilic component of an OBB based on site-specific conditions, which can be used to inform OBB design.
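[Editor's illustrative sketch] The kind of real-time screening described above could, in principle, be as simple as flagging ORP readings that fall below an aerobic threshold. The sketch below is a generic illustration, not the monitoring system actually deployed; the threshold value, data layout, and function names are assumptions.

from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List

# Hypothetical screening threshold (mV); site-specific criteria would differ.
AEROBIC_ORP_THRESHOLD_MV = 0.0

@dataclass
class OrpReading:
    timestamp: datetime
    sensor_id: str
    orp_mv: float  # oxidation-reduction potential in millivolts

def flag_reducing_periods(readings: Iterable[OrpReading],
                          threshold_mv: float = AEROBIC_ORP_THRESHOLD_MV) -> List[OrpReading]:
    """Return readings below the aerobic threshold, i.e., candidate periods
    where the barrier may not be maintaining oxidizing conditions."""
    return [r for r in readings if r.orp_mv < threshold_mv]

# Example with made-up data:
readings = [
    OrpReading(datetime(2020, 6, 1, 0, 0), "obb-01", 185.0),
    OrpReading(datetime(2020, 6, 1, 6, 0), "obb-01", -40.0),  # dip after precipitation
]
alerts = flag_reducing_periods(readings)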
Item Open Access The impact of grip strength recovery on grip force accuracy in chronic stroke (Colorado State University. Libraries, 2020) Alam, Tasnuva, author; Lodha, Neha, advisor; Hickey, Matthew, committee member; Yu, Yawen, committee member
Decreased grip force accuracy and decreased grip strength are two well-documented grip impairments that impede upper extremity function after stroke. Grip force accuracy is essential for performing precise motor actions in everyday life. Grip strength represents the ability to produce maximal grip force in a short duration of time and constitutes a hallmark of upper extremity recovery in chronic stroke. Adequate grip strength and grip force accuracy are both important for regaining motor function after stroke. Despite this, no study has investigated whether the recovery of grip strength influences improvements in force accuracy. Purpose: The purpose of this study was to investigate the impact of grip strength recovery on grip force accuracy in chronic stroke patients. Methods: We recruited two distinct stroke groups with low (less than 60%) and high (60% or more) grip strength recovery, computed as the percent of paretic grip strength relative to nonparetic grip strength. A total of thirty-three participants took part in the study: eleven in the low strength recovery group (age 64 ± 14.8 years; 6 females and 5 males), eleven in the high strength recovery group (age 65.9 ± 9.9 years; 7 females and 4 males), and eleven age-matched controls (age 69.6 ± 9.8 years; 4 females and 7 males). To examine the impact of grip strength recovery on grip force accuracy, all participants performed two tasks with each hand: 1) a maximum voluntary contraction (MVC) and 2) a dynamic force tracking task. We quantified grip strength as the maximum force produced in the MVC task, and we assessed force accuracy by measuring the root mean square error (RMSE) relative to the absolute target force. Results: Grip strength recovery in the low strength recovery stroke group (27.1 ± 17.7%) was lower than in the high strength recovery group (92.4 ± 24.9%) and controls (94.9 ± 18.9%). A significant main effect of Group [F(2, 30) = 34.53, p < 0.05, partial η² = 0.69] revealed that grip strength recovery in the low strength recovery group was significantly less than in the high strength recovery stroke group (p < 0.05) and controls (p < 0.05), whereas the high strength recovery group was not significantly different from the control group (p > 0.05). A significant Group × Hand interaction [F(2, 30) = 7.21, p < 0.05, partial η² = 0.33] demonstrated that the relative RMSE of the paretic hand was significantly greater in the low strength recovery stroke group than in the high strength recovery group (p < 0.05). Importantly, the relative RMSE of the paretic hand in the high strength recovery group was significantly greater than that of the control group's non-dominant hand (p < 0.05). Overall, a significant negative relationship between grip strength recovery and paretic relative RMSE (r = -0.598, p = 0.003) was found when investigating correlations in both stroke groups together. In the low strength recovery group, we found a negative association between grip strength recovery and paretic relative RMSE (r = -0.552, p = 0.078). However, in the high strength recovery group, we found no association between grip strength recovery and paretic relative RMSE (r = 0.308, p = 0.357). Conclusion: Grip strength recovery and force accuracy follow differential patterns of improvement for the low and high strength recovery stroke groups. In chronic stroke survivors with strength recovery of less than 60%, grip strength recovery is associated with grip force accuracy. However, in chronic stroke survivors with strength recovery of more than 60%, grip force accuracy may still be impaired despite near-normal grip strength recovery. After substantial gains in grip strength, interventions that enhance grip force accuracy may be needed to improve upper-extremity function. Our results suggest that, after improvement in strength, patients need additional interventions, such as exergaming that trains force accuracy, to help them use the regained strength more meaningfully.
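[Editor's illustrative sketch] The error measure described above, RMSE normalized by the target force, can be computed along the following lines. This is a minimal sketch under stated assumptions (normalization by the mean absolute target force, synthetic data), not the study's actual analysis code.

import numpy as np

def relative_rmse(produced_force: np.ndarray, target_force: np.ndarray) -> float:
    """Root mean square tracking error, expressed relative to the mean
    absolute target force (one common normalization; others exist)."""
    rmse = np.sqrt(np.mean((produced_force - target_force) ** 2))
    return rmse / np.mean(np.abs(target_force))

# Example with synthetic data: tracking a sinusoidal force target with noisy output.
t = np.linspace(0, 10, 1000)
target = 5.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t)                   # target force (N)
produced = target + np.random.default_rng(1).normal(0, 0.4, t.size)  # noisy tracking
print(f"relative RMSE: {relative_rmse(produced, target):.3f}")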