Browsing by Author "Rhodes, Matthew, advisor"
Now showing 1 - 5 of 5

Item Open Access Framing metamemory judgments: judgments of retention intervals (JORIs) (Colorado State University. Libraries, 2010)
Tauber, Sarah K., author; Rhodes, Matthew, advisor; McCabe, David, committee member; Kraiger, Kurt, committee member; Rickey, Dawn, committee member
Prior research has shown that participants' predictions of memory performance are not sensitive to the time between study and test. However, this work has largely relied on one metacognitive measure, Judgments of Learning (JOLs), to assess such awareness. Thus, in three experiments I explored a new metacognitive measure, Judgments of Retention Intervals (JORIs), in which participants estimate how long (in minutes) information will be remembered. Results demonstrated that the metacognitive measure itself influences assessments of monitoring and control. For example, participants who made JORIs chose to restudy more items than participants who made JOLs (Experiment 2). However, participants demonstrated difficulty incorporating information about a retention interval into their judgments regardless of the type of judgment made (i.e., JOLs or JORIs). Results are considered within existing theoretical frameworks. I suggest that the metacognitive measure needs to be considered in order to accurately assess metacognitive awareness, and that additional work is needed to assess metacognitive awareness of retention intervals.

Item Open Access Is judgment reactivity really about the judgment? (Colorado State University. Libraries, 2023)
Myers, Sarah J., author; Rhodes, Matthew, advisor; Cleary, Anne, committee member; Fisher, Gwen, committee member; Folkestad, James, committee member
A common research tool for measuring learners' understanding of their own learning is to collect judgments of learning (JOLs), whereby participants indicate how likely they are to remember information on a later test. Importantly, recent work has demonstrated that soliciting JOLs can impact learning and memory itself, an effect referred to as JOL reactivity. However, the underlying cognitive processes that are engaged when learners make JOLs, and that lead to later reactivity effects, are not yet well understood. To better elucidate the mechanisms that drive JOL reactivity, I examined how changing the method of soliciting JOLs affects reactivity. In Experiment 1, I manipulated how long participants had to make their JOLs; in Experiment 2, I compared JOLs made on a percentage scale versus a binary (yes/no) scale; and in Experiment 3, participants were required to explain why they made some of their JOLs. Judgments that require or allow for more in-depth processing (i.e., longer time in Experiment 1, percentage scales in Experiment 2, explaining in Experiment 3) should demand more effort from participants. If these more effortful judgments lead to larger reactivity effects, it would suggest that reactivity is driven by processes that occur while making JOLs. However, findings from the experiments did not support this account. Although some differences in reactivity were seen for binary JOLs and for JOLs that required explanations compared with percentage JOLs, the hypothesis that more cognitive effort would result in stronger reactivity was not supported. Therefore, results suggest that the mere presence of JOLs during study may cause a general shift in participants' learning approach, resulting in later JOL reactivity.

Item Open Access Testing effects for self-generated versus experimenter-generated questions (Colorado State University. Libraries, 2020)
Myers, Sarah J., author; Rhodes, Matthew, advisor; Cleary, Anne, committee member; Folkestad, James, committee member
Those familiar with the testing effect (i.e., the finding that practicing retrieval improves memory) frequently suggest that students test themselves while studying for their classes. However, it is unclear whether students benefit from testing if they are not provided with testing materials. Few studies have examined whether generating one's own test questions improves performance, and none of these studies have given participants a full retrieval opportunity. The present experiments bridged this gap between testing-effect and question-generation research by allowing participants to generate questions and attempt to answer those questions after a delay. In Experiment 1, participants generated test questions over passages and answered their questions either as they created them or after a delay. In Experiment 2, participants either generated questions and answered them after a delay (i.e., self-testing), answered experimenter-generated questions, or restudied the material. Both experiments found no benefit of self-testing compared to the other conditions. In fact, those who self-tested tended to have worse final test performance than those in the other conditions. Analyses of the questions that participants created suggest that students may benefit more from self-testing when they generate more questions and when those questions target material that appears on the final test. Although further research is needed to confirm these conclusions (e.g., with longer delays between study activities and the final test), the current study suggests that testing may not always benefit learning if students must create their own questions.

Item Open Access The influence of feedback on predictions of future memory performance (Colorado State University. Libraries, 2013)
Sitzman, Danielle Marie, author; Rhodes, Matthew, advisor; Cleary, Anne, committee member; Davalos, Deana, committee member; Robinson, Dan, committee member
The current experiments explored metacognitive beliefs about feedback. In Experiment 1, participants studied Lithuanian-English word pairs, took an initial test, and were shown correct-answer feedback, right/wrong feedback, or no feedback. They then made a judgment of learning (JOL) regarding the likelihood of answering each item correctly on a later test. Participants were tested on the same word pairs during the final test. Although average JOLs were higher for items in the correct-answer feedback condition, relative accuracy was impaired. Experiment 2 explored participants' beliefs about feedback by having half of them make JOLs prior to seeing an item (preJOLs), with only knowledge of whether feedback would be provided. Participants in both the regular JOL and preJOL conditions provided higher average JOLs for items in the feedback condition than for items in the no-feedback condition; however, relative accuracy was lower in the feedback condition. In Experiment 3, participants completed a procedure similar to that of Experiment 1 twice, with two lists of word pairs. Metacognitive accuracy did not improve from List 1 to List 2. Lastly, Experiment 4 used scaffolded feedback in an attempt to increase metacognitive accuracy. Participants corrected more errors when they could generate the correct response with fewer letter cues. However, relative judgment accuracy was not higher than in the previous experiments. In sum, the current experiments suggest that participants may have a general understanding of the benefits of feedback; however, feedback diminishes prediction accuracy for specific items.

Item Open Access Veterinary school instructor knowledge and use of study strategies (Colorado State University. Libraries, 2024)
Osborn, Rebecca M., author; Rhodes, Matthew, advisor; Cleary, Anne, committee member; Tompkins, Sara Anne, committee member; Balgopal, Meena, committee member
Empirically supported study strategies have been investigated for years, and there is a growing body of research on which strategies undergraduate students know of and use while studying. However, there is less research on instructors' knowledge and endorsement of study strategies, even though instructors can serve as a guide to students in how to study. Little to no research has evaluated which strategies instructors at professional schools (e.g., medical, pharmacy, or veterinary schools) encourage students to use, even though these students are expected to be lifelong learners. In the current study, instructors in veterinary medicine were surveyed on their knowledge and endorsement of study strategies, including learning scenarios in which participants rated strategy effectiveness. Endorsement of study strategies was also correlated with the ranking and acceptance rate of the veterinary school at which each instructor teaches, to determine whether endorsement of empirically supported strategies is related to school quality. The survey found that instructors endorsed both beneficial and nonbeneficial study strategies and learning scenarios, but they were more likely to encourage empirically supported strategies to students. School ranking and acceptance rate showed no correlation with greater endorsement of beneficial strategies. The results of this survey demonstrate that veterinary instructors have a slight preference for empirically supported learning strategies but continue to hold some misconceptions about learning. Further research is needed to determine how best to reach and inform this instructor population, but veterinary instructors are highly motivated to learn more about how best to teach veterinary students.