07/06/2021

On Positive Moderation Decisions

Mattia Samory

Keywords: qualitative and quantitative studies of social media, subjectivity in textual data, sentiment analysis, polarity/opinion identification and extraction, linguistic analyses of social media behavior, text categorization, topic recognition, demographic/gender

Abstract: A crucial role of moderators is to decide what content is allowed in their community. Though research has advanced our understanding of the content that moderators remove, such as spam and hateful messages, we know little about what moderators approve. This work analyzes moderator-approved content from 49 Reddit communities. It sheds light on the complexity of moderation by giving empirical evidence that the difference between approved and removed content is often subtle. In fact, approved content is more similar to removed content than it is to the remaining content in a community---i.e., content that has never been reviewed by a moderator---along dimensions of topicality, psycholinguistic categories, and toxicity. Building on this observation, I quantify the implications for NLP systems aimed at supporting moderation decisions, which often conflate moderator-approved content with content that has potentially never been reviewed by a moderator. I show that these systems would remove over half of the content that moderators approved. I conclude with recommendations for building better tools for automated moderation, even when approved content is not available.
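The labeling pitfall the abstract describes can be sketched in a few lines. The scores, class pools, and threshold rule below are all hypothetical, not the paper's actual features or model: a "removal" classifier is fit to separate removed from unreviewed content, and because approved content resembles removed content, it flags much of what moderators explicitly kept.

```python
# Hypothetical sketch of the label-conflation problem: training a removal
# classifier on removed vs. never-reviewed content, then evaluating it on
# moderator-approved content. All scores are synthetic toy data.

# Synthetic one-dimensional "toxicity-like" scores (higher = more
# rule-breaking-looking). These numbers are illustrative assumptions.
removed    = [0.9, 0.8, 0.85, 0.7, 0.75]   # taken down by moderators
unreviewed = [0.1, 0.2, 0.15, 0.3, 0.25]   # never seen by a moderator
approved   = [0.7, 0.65, 0.8, 0.3, 0.75]   # reviewed and explicitly kept

def mean(xs):
    return sum(xs) / len(xs)

# A one-feature threshold "classifier": the midpoint between class means.
threshold = (mean(removed) + mean(unreviewed)) / 2  # here: 0.50

# Evaluate on approved content: how much would this classifier remove?
flagged_share = sum(s > threshold for s in approved) / len(approved)
print(f"threshold={threshold:.2f}, approved content flagged={flagged_share:.0%}")
# → threshold=0.50, approved content flagged=80%
```

Because approved content sits close to removed content along such dimensions, the toy classifier flags most of it, mirroring the paper's finding that such systems would remove over half of moderator-approved content.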

The talk and the respective paper are published at the ICWSM 2021 virtual conference.
