Among the most common criticisms of the Predator Alert Tool project, a suite of browser apps and social network add-ins that aims to “build sexual violence prevention mechanisms into every social network on the Internet,” is the claim that, because the tools are unmoderated, they “put you in a worse position” than you were in before they existed. As the Bitter BDSM’er Brigade of whingers tells it:
But these tools (at least the Fetlife tool) is pretty much rendered useless because *surprise* it was flooded with all sorts of stuff, legit and not, which just puts you in a worse position because you can’t tell if someone is actually dangerous or if someone wanted to play a joke or if someone wanted to act maliciously against another.
Beyond the fact that this claim is ridiculous on its face (how does accessing a tool that gives you more information put you in a worse position than not having access to it in the first place?), my collaborators and I have already addressed this criticism several times.
- In a post about the Predator Alert Tool for FetLife called “Tracking rape culture’s social license to operate,” we showed empirically that the people who tend to “flood” that tool “with all sorts of [‘illegitimate’] stuff” are also people who do not support sexual assault survivors having access to information with which to keep themselves physically safe from abuse. Moreover, we showed that these floods are easy to collate, and that it is relatively simple to reveal the identity of users who flood or misuse the system, even in the tool’s deliberately anonymous context. Knowing who those unsupportive people are helps a survivor avoid abusive behavior, and users at large (as opposed to a small set of privileged moderators) can only know who they are if their “joke” postings are left unmoderated. Moderating those comments therefore functionally hinders survivors while protecting users who behave abusively.
- Then, in a post called “About Predator Alert Tool for Facebook’s ‘no deletions’ policy,” we explained why and how providing only mechanisms to add new information to Predator Alert Tool databases, rather than to remove historical information, limits how effectively cyberbullies can appropriate the Predator Alert Tools to behave abusively towards their targets. By never implementing “delete” functionality in the codebase, we make one of cyberbullies’ most common abuse tactics technologically infeasible: sending abusive messages to a target and then removing those messages before a moderator sees them.
- Finally, in the same post, we also explained that moderation systems are costly to run and maintain, both in computing power and in human resources. Since “the people behind” Predator Alert Tool are a rag-tag group of individuals in loose collaboration with one another, making relatively arbitrary decisions about strangers’ lives seems both impractical and unethical. We could not do traditional moderation even if we wanted to, because we simply don’t have the resources; Predator Alert Tool for Facebook, for example, is accessible to all Facebook users, and there are more than one BILLION of them.
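The flood-collation point above is easy to illustrate: someone flooding an anonymous reporting tool tends to paste the same “joke” text onto many profiles, so even naive duplicate-grouping surfaces the flood as a single cluster. Here is a minimal Python sketch of that idea (the function name and sample data are hypothetical, and this is not the actual analysis from the linked post):

```python
from collections import defaultdict

def collate_floods(reports, min_copies=3):
    """Group reports whose normalized statement text is identical.

    Hypothetical illustration: a flooder typically pastes the same "joke"
    statement onto many profiles, so grouping by normalized text alone is
    enough to collate the flood into one bucket.
    """
    groups = defaultdict(list)
    for report in reports:
        # Normalize case and whitespace so trivial variations still match.
        key = " ".join(report["statement"].lower().split())
        groups[key].append(report)
    # Keep only statements repeated across many targets -- the floods.
    return {text: rs for text, rs in groups.items() if len(rs) >= min_copies}

# Hypothetical sample data:
sample = [
    {"target": "alice", "statement": "LOL  this tool is a joke"},
    {"target": "bob", "statement": "lol this tool is a joke"},
    {"target": "carol", "statement": "lol this tool is a joke"},
    {"target": "dave", "statement": "He ignored my safeword at a party."},
]
floods = collate_floods(sample, min_copies=3)
print(list(floods))  # ['lol this tool is a joke']
```

Real floods would call for fuzzier matching, but the point stands: mass-posted “illegitimate” entries are self-similar in a way that genuine reports are not, which is exactly what makes them easy to collate.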
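The “no deletions” policy described above amounts to an append-only data store: the API simply never exposes a delete or edit operation, so retraction is not a thing users can do. A minimal Python sketch of that design choice (hypothetical names; this is not the actual Predator Alert Tool codebase):

```python
import time

class AppendOnlyReportLog:
    """A report store that deliberately exposes no delete or edit methods.

    Because nothing can be retracted, the common cyberbullying tactic of
    sending an abusive message and then deleting it before a moderator
    sees it is simply not expressible against this API.
    """

    def __init__(self):
        self._reports = []  # only ever appended to

    def append(self, reporter, subject, statement):
        record = {
            "reporter": reporter,
            "subject": subject,
            "statement": statement,
            "timestamp": time.time(),
        }
        self._reports.append(record)
        return record

    def reports_about(self, subject):
        # Readers get copies, so history can't be edited in place either.
        return [dict(r) for r in self._reports if r["subject"] == subject]

log = AppendOnlyReportLog()
log.append("anon", "user123", "sent me threatening messages")
print(len(log.reports_about("user123")))  # 1
```

The enforcement here is structural rather than procedural: no moderator has to catch a retraction in time, because the code offers no way to retract at all.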
But even if we did have the resources to moderate statements related to sexual abuse from a community of one billion people, I would not support doing that, because “moderation,” especially in this context, is, bluntly, a terrible idea.
Francis Tseng recently published a remarkably succinct and clear overview of “moderation” techniques to curb abusive behavior in online communities. It’s particularly noteworthy because it acknowledges the potential of moderation itself to be an enabler of abusive behavior:
[Traditional moderation] is typically realized through a small group of appointed moderators (or even a singular moderator) who scans for “inappropriate” content or responds to content flagged as such by users. She then makes a decision to punish the user or not to, and executes that decision – with or without discussion with fellow moderators.
Naturally, a justice process which does not directly involve members of its community raises suspicion. Nor does it function particularly well. There is a legacy of moderator abuse, favoritism, and corruption where the very system meant to maintain the quality of a group leads to its own demise. Users feel persecuted or unfairly judged, and there is seldom ever a formal process for appeal. In large communities – Reddit’s r/technology has over 5 million users, which has had its share of mod drama – an appeal process may seem impractical to implement. The assurance of the success of such systems is about the same as it is in any where authority is concentrated in one or a few–it’s the same as hoping for a kind despot or benevolent dictator, one that happens to have your interests at heart.
When designing infrastructure for any community, whether it be a multiplayer video game or an internet forum, the power of moderation must be distributed amongst the users, so that they themselves are able to dictate how the community evolves and grows. In this way, judgements of abusive behavior reflect the actual sentiment of the community as a whole, as opposed to the idiosyncrasies of a stranger, as it often is in far-flung and large digital communities.
Put less diplomatically, the Internet has been doing “Report Abuse” wrong because its admins are corrupt.
Predator Alert Tool, by contrast, is designed to change the way people think about bullying, violence, and abuse. Rather than creating an opaque appeal to authority that silences people (such as current “Report Abuse” forms), it sends a radically transparent and contextualized signal boost to friends and supporters of the person whom bullies and abusers target. Using Predator Alert Tool [for Twitter, Facebook, et al.], the targeted user can ask for help and support at the same time as they alert the rest of the [social network’s] user community about behavior they have experienced as abusive.
“Moderation” is a governance tool that may make sense in online communities with a relatively homogeneous populace, such as multiplayer video games or topically-oriented forums. But moderation is inherently in conflict with the goal of dissolving authority and dispersing power amongst a heterogeneous populace already prone to conflict. There is no system of moderation that is not also a system of social control. And in the context of a project explicitly designed to overcome the iniquities introduced into human experience by traditional mechanisms of social control, adding a traditional mechanism of social control is shortsighted at best and active sabotage at worst.
We realize this is difficult to understand at first. After all, there is currently no physical-world social context wherein we are free from the power of authorities we did not choose and also do not agree with. Everyone has a parent, a teacher, or a boss—even the fucking police. As one PAT collaborator wrote:
We’re all so accustomed to having our spaces monitored and moderated and overseen “for our own safety” that sometimes, when we take the well-being of our communities into our own hands, we appear to be doing more harm than good. That’s only because we’re comparing our efforts to the imaginary “safe” world we’ve been told that we live in, not to the dangerous realities that survivors actually face online and off.
Put another way: from the perspective of a vulnerable populace, namely people who are the targets of rape and physical abuse, a system that erodes the power of central authorities (such as website admins or the cops) is a move towards safety, not away from it.