Browsing by Author "Myers, Sarah J., author"
Is judgment reactivity really about the judgment? (Colorado State University. Libraries, 2023) [Open Access]
Myers, Sarah J., author; Rhodes, Matthew, advisor; Cleary, Anne, committee member; Fisher, Gwen, committee member; Folkestad, James, committee member

A common research tool used to measure learners' understanding of their own learning is to collect judgments of learning (JOLs), whereby participants indicate how likely they are to remember information on a later test. Importantly, recent work has demonstrated that soliciting JOLs can impact true learning and memory, an effect referred to as JOL reactivity. However, the underlying cognitive processes that are engaged when learners make JOLs, and that lead to later reactivity effects, are not yet well understood. To better elucidate the mechanisms that drive JOL reactivity, I examined how changing the method of soliciting JOLs impacts reactivity. In Experiment 1, I manipulated how long participants had to make their JOLs; in Experiment 2, I compared JOLs made on a percentage scale versus a binary (yes/no) scale; and in Experiment 3, participants were required to explain why they made some of their JOLs. Judgments that require or allow for more in-depth processing (i.e., longer time in Experiment 1, percentage scales in Experiment 2, explaining in Experiment 3) should demand more effort from participants. If these more effortful judgments lead to larger reactivity effects, it would suggest that reactivity is driven by processes that occur while making JOLs. However, findings from the experiments did not support this account. Although some differences in reactivity were observed after making binary and explaining JOLs compared to percentage JOLs, the hypothesis that greater cognitive effort would produce stronger reactivity was not supported. The results therefore suggest that the mere presence of JOLs during study may cause a general shift in participants' learning approach, resulting in later JOL reactivity.

Testing effects for self-generated versus experimenter-generated questions (Colorado State University. Libraries, 2020) [Open Access]
Myers, Sarah J., author; Rhodes, Matthew, advisor; Cleary, Anne, committee member; Folkestad, James, committee member

Those familiar with the testing effect (i.e., the finding that practicing retrieval improves memory) frequently suggest that students test themselves while studying for their classes. However, it is unclear whether students benefit from testing if they are not provided with testing materials. Few studies have examined whether generating one's own test questions improves performance, and none of those studies gave participants a full retrieval opportunity. The present experiments bridged this gap between testing-effect and question-generation research by allowing participants to generate questions and attempt to answer those questions after a delay. In Experiment 1, participants generated test questions over passages and either answered their questions as they created them or after a delay. In Experiment 2, participants either generated questions and answered them after a delay (i.e., self-testing), answered experimenter-generated questions, or restudied the material. Both experiments found no benefit of self-testing compared to the other conditions. In fact, those who self-tested tended to have worse final test performance than participants in the other conditions.
Analyses of the questions that participants created suggest that students may benefit more from self-testing when they generate more questions and when those questions target material that appears on the final test. Although further research is needed to confirm these conclusions (e.g., with longer delays between the study activities and the final test), the current study suggests that testing may not always benefit learning if students must create their own questions.