The impact of referent display on interaction proposals during multimodal elicitation studies
Date
2021
Authors
Williams, Adam S., author
Ortega, Francisco R., advisor
Beveridge, Ross, committee member
Sharp, Julia, committee member
Abstract
Elicitation studies have become a popular method of participatory design. While traditionally used to find unimodal gesture-based inputs, elicitation has increasingly been used to derive multimodal interaction techniques. This is concerning, as no prior work has examined how well elicitation methods transfer from unimodal gesture use to multimodal combinations of inputs. This work details a comparison between two elicitation studies that were identical in design except for the way participants were prompted for interaction proposals: referents (e.g., commands to be executed) were shown either as text or as animations. Interaction proposals were elicited for speech, gesture, and gesture+speech input modalities. Based on the comparison of these studies and other existing elicitation studies, the concern that referent display primes users' proposed interaction techniques is brought to light. The results of these elicitation studies were not reproduced across referent displays. Gesture proposals were the least affected, with high similarity in the overall proposal space. Speech proposals were biased toward imitating the displayed text, doing so an average of 69.36% of the time. The time between gesture and speech initiation in multimodal use was 166.51% longer when participants were prompted with text. The second contribution of this work is a consensus set of mid-air gesture inputs for generic object manipulations in augmented reality environments. This consensus set was derived from the elicitation study that used text-based referent displays, which were found to be less biasing on participants' gesture production than the animated referents.
Subject
elicitation
human-computer interaction
multimodal
gesture
augmented reality
interaction