19/10/2020

A toolkit for managing multiple crowdsourced top-k queries

Caihua Shan, Leong Hou U, Nikos Mamoulis, Reynold Cheng

Keywords: top-k query, crowdsourcing, query management

Abstract: Crowdsourced ranking and top-k queries have attracted significant attention recently. Their goal is to combine human cognitive abilities and machine intelligence to rank computer-hostile but human-friendly items. Many task assignment algorithms and inference approaches have been proposed to publish suitable micro-tasks to the crowd, obtain informative answers, and aggregate a ranking from noisy human answers. However, they all focus on single-query processing; to the best of our knowledge, no prior work helps users manage multiple crowdsourced top-k queries. We propose a toolkit for crowdsourced top-k query management that works seamlessly with most existing inference and task assignment methods. The toolkit optimizes human resource allocation across queries and continuously monitors query quality at any stage of the crowdsourcing process, so a user can terminate a query early once the estimated quality fulfills her requirements. In addition, the toolkit provides user-friendly interfaces for users to initialize queries, monitor execution status, and perform further operations manually.
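To make the quality monitoring and early termination described in the abstract concrete, the following is a minimal, hypothetical sketch of such a management loop; it is not the authors' implementation, and all names (TopKQuery, assign_tasks, collect_answers, infer_ranking, estimate_quality) are placeholders standing in for whatever task assignment and inference components the toolkit plugs in.

    # Hypothetical sketch of a crowdsourced top-k query managed in rounds:
    # micro-tasks are assigned to the crowd, noisy answers are aggregated by
    # an inference method, and the query stops early once its estimated
    # quality reaches the user's target. Component functions are assumed,
    # not taken from the paper.
    from dataclasses import dataclass, field

    @dataclass
    class TopKQuery:
        items: list             # candidate items to be ranked by the crowd
        k: int                   # number of top items requested
        target_quality: float    # user-specified quality threshold in [0, 1]
        answers: list = field(default_factory=list)  # collected crowd answers

    def run_query(query, budget_per_round, max_rounds,
                  assign_tasks, collect_answers, infer_ranking, estimate_quality):
        """Run one crowdsourced top-k query with early termination."""
        ranking, quality = list(query.items), 0.0
        for round_no in range(max_rounds):
            tasks = assign_tasks(query, budget_per_round)   # pick informative micro-tasks
            query.answers.extend(collect_answers(tasks))    # publish to the crowd, gather answers
            ranking = infer_ranking(query)                  # aggregate noisy answers into a ranking
            quality = estimate_quality(query, ranking)      # estimated quality of the current top-k
            if quality >= query.target_quality:             # user's requirement already met:
                return ranking[:query.k], quality, round_no # terminate early, release the budget
        return ranking[:query.k], quality, max_rounds       # budget exhausted

A multi-query manager in this spirit would simply interleave such rounds across several TopKQuery instances and direct the per-round crowd budget to the query that benefits most, which is the resource-allocation aspect the abstract mentions.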

The video of this talk cannot be embedded. You can watch it here:
https://dl.acm.org/doi/10.1145/3340531.3417415#sec-supp
The talk and the accompanying paper were published at the CIKM 2020 virtual conference.

