25/04/2020

Improving Humans' Ability to Interpret Deictic Gestures in Virtual Reality

Sven Mayer, Jens Reinhardt, Robin Schweigert, Brighten Jelke, Valentin Schwind, Katrin Wolf, Niels Henze

Keywords: deictic, ray tracing, virtual reality, correction model

Abstract: Collaborative Virtual Environments (CVEs) offer unique opportunities for human communication. Humans can interact with each other over a distance in any environment and visual embodiment they want. Although deictic gestures are especially important as they can guide other humans' attention, humans make systematic errors when using and interpreting them. Recent work suggests that the interpretation of vertical deictic gestures can be significantly improved by warping the pointing arm. In this paper, we extend previous work by showing that such models can also improve the interpretation of deictic gestures at targets all around the user. Through a study with 28 participants in a CVE, we analyzed the errors users make when interpreting deictic gestures. We derived a model that rotates the arm of a pointing user's avatar to improve the observing users' accuracy. A second study with 24 participants shows that we can improve observers' accuracy by 22.9%.
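The core idea described in the abstract is to re-pose the pointing avatar's arm by angular offsets predicted from the pointing direction, so that observers interpret the gesture more accurately. The sketch below is a minimal illustration of that idea, not the paper's implementation: the coordinate conventions, the helper names, and in particular the `correction_model` stub (a constant upward offset here) are assumptions; the actual offsets would come from the model fitted in the paper.

```python
import numpy as np

def direction_to_angles(v):
    """Convert a unit pointing vector to (azimuth, elevation) in degrees."""
    x, y, z = v
    azimuth = np.degrees(np.arctan2(x, -z))   # yaw around the vertical axis
    elevation = np.degrees(np.arcsin(y))      # pitch above the horizon
    return azimuth, elevation

def angles_to_direction(azimuth, elevation):
    """Convert (azimuth, elevation) in degrees back to a unit vector."""
    az, el = np.radians(azimuth), np.radians(elevation)
    return np.array([np.cos(el) * np.sin(az),
                     np.sin(el),
                     -np.cos(el) * np.cos(az)])

def corrected_arm_direction(raw_direction, correction_model):
    """
    Rotate the avatar's pointing-arm direction by the angular offsets
    returned by `correction_model(azimuth, elevation)` (in degrees).
    The model itself is only stubbed here; the paper derives it from
    observer error data collected all around the user.
    """
    az, el = direction_to_angles(raw_direction)
    d_az, d_el = correction_model(az, el)
    return angles_to_direction(az + d_az, el + d_el)

# Illustrative stub only: a constant 5-degree upward offset.
example_model = lambda az, el: (0.0, 5.0)

corrected = corrected_arm_direction(np.array([0.0, 0.0, -1.0]), example_model)
print(corrected)
```

In a CVE, the corrected direction would drive the rendered arm pose of the pointing user's avatar as seen by observers, while the pointing user's own view remains unchanged.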

Video of the talk: https://www.youtube.com/watch?v=iUmL_Kek6wU
The talk and the respective paper are published at the CHI 2020 virtual conference.
