25/04/2020

How to Trick AI: Users' Strategies for Protecting Themselves from Automatic Personality Assessment

Sarah Völkel, Renate Haeuslschmid, Anna Werner, Heinrich Hussmann, Andreas Butz

Keywords: chatbot, automatic personality assessment, personality

Abstract: Psychological targeting tries to influence and manipulate users' behaviour. We investigated whether users can protect themselves from being profiled by a chatbot that automatically assesses users' personality. Participants interacted with the chatbot twice: (1) They chatted for 45 minutes in customer service scenarios and received their actual profile (baseline). (2) They were then asked to repeat the interaction and to disguise their personality by strategically tricking the chatbot into calculating a falsified profile. In interviews, participants mentioned 41 different strategies but could only apply a subset of them in the interaction. They were able to manipulate all Big Five personality dimensions by nearly 10

The video of this talk cannot be embedded. You can watch it here:
https://www.youtube.com/watch?v=rOCG8B-hHGQ
The talk and the respective paper are published at the CHI 2020 virtual conference.

