25/04/2020

An Experimental Study of Bias in Platform Worker Ratings

Farnaz Jahanbakhsh, Justin Cranshaw, Scott Counts, Walter Lasecki, Kori Inkpen

Keywords: digital ratings, gender discrimination, social mimicry, bias in ratings, bias in gig platforms

Abstract: We study how the ratings people receive on online labor platforms are influenced by their performance, their gender, their rater’s gender, and the display of ratings from other raters. We conducted a deception study in which participants collaborated on a task with a pair of simulated workers, who varied in gender and performance level, and then rated their performance. When the performance of the paired workers was similar, low-performing females were rated lower than their male counterparts. When there was a clear performance difference between the paired workers, low-performing females were preferred over a similarly performing male peer. Furthermore, displaying an average rating from other raters made ratings more extreme: high-performing workers received significantly higher ratings, and low-performing workers received lower ratings, than when average ratings were absent. This work contributes an empirical understanding of when biases in ratings manifest, and offers recommendations for how online work platforms can counter these biases.

The video of this talk is available at:
https://www.youtube.com/watch?v=dsJACNeUYlE
The talk and the corresponding paper were published at the CHI 2020 virtual conference.

