The unconscious bias in Facebook’s moderation problem

Following the recent leak of Facebook’s moderation practices, Venessa Paech looks at the many challenges for the platform, and where to next for Facebook and its users

13th July 2017

Facebook’s moderation practices are in the news yet again, as more of their playbook was leaked to the public (aka, their users) recently. The leaked documents highlight alarming inconsistencies and a disdain for duty of care. But they also illuminate the bias at the heart of the problem.

Technology isn’t neutral. Like any other tool, it’s imbued with the assumptions, expectations, politics and predilections of those who create it. It’s shaped by the experiences and perspectives its makers have lived up to that moment. We’re getting better at recognising and responsibly managing unconscious bias in our businesses and organisations. But we’re pretty ordinary at calling it out when it comes to social networks that feature prominently in our daily lives.

The standard push-back to more responsible content and user regulation on Facebook is concern over censorship (driven primarily by a nativist US perspective). This is, of course, a straw man argument. Facebook policies and positions already ‘censor’. Terms of service prohibit certain behaviours and activities. This is the beginning of curating a culture – at the margins.

When Facebook moderators remove a personal picture of a mother breastfeeding, but not a graphic rape or death threat, they’re making choices and creating cultural norms. These interventions are based on an explicit world view and legal lens particular to a small portion of actual users. They don’t consider community context and they ignore the fact they’re already a form of ‘censorship’.

When someone is intimidated or threatened out of participation, censorship is in effect. Women and other institutionally marginalised groups are routinely drummed out of the virtual room by armies of trolls, men’s rights activists or the loudest hater in the room at any given moment. For some it’s a matter of life and death, where speaking up or speaking out provokes the ultimate attempt to silence – doxing (posting personally identifiable details like address or phone number, with the invitation to harass or stalk that person).

Facebook’s reporting tools, though better than they once were, still suffer from an either/or, square-peg/round-hole problem, with inadequate categories to accommodate the thrilling range of abuse directed at women on the platform. And of course there are the all-too-common cases of people in crisis (again, usually women) seeking the connectivity and support of their personal social network, but needing to cloak their identity and online activity from abusive parties.

Facebook will tell you repeatedly that they are committed to improving their content reporting capabilities, making it ‘easier’ and ‘faster’ to raise alarm. But reports are still in relation to Facebook’s ‘standards’, and it’s there that unconscious bias needs tackling first. By prioritising profit over harm minimisation, by intentionally refusing to use available technology to quarantine graphic content, and by ignoring their wider unconscious bias, Facebook is complicit.

Twitter founder Evan Williams recently issued a sincere mea culpa for what he now understands as Twitter’s role in mobilising toxic behaviour. I like and admire Evan, but was surprised at the naiveté of his comment: “I thought once everyone could speak freely and exchange information and ideas, the world was automatically going to be a better place. I was wrong about that.” I’m pretty sure most women (and other institutionally marginalised voices) could have safely predicted what was in store – and offered some suggestions about better tools to help manage it.

Facebook has taken positive steps to scrutinise and address employer-side unconscious bias. But this doesn’t yet filter through to the platform itself. A diversity of perspectives in the room allows Facebook to more accurately map risk scenarios and desired user journeys. How many women or people of colour were involved when their now infamous moderation playbook was created? How many did they run it by to see if it tracked with lived experience? Were there anthropologists and ethnographers working alongside the lawyers? Diversity makes better products. It also makes safer, more equitable ones. Lessons from community management and the world of social science teach us that when people feel they can disclose without threat, they’re more likely to (ironically, revealing the kind of intimate data Facebook transforms into product, currency and share price).

Social and community professionals manage millions of groups across Facebook’s platform, not to mention the ‘accidental’ community managers who voluntarily administer local groups and communities of interest. One way to help scale the mountain of moderation is to engage these people in the creation of tools to manage and create a group culture that reflects their values and their needs. There are signs Facebook is starting to address this, with a ‘Communities Summit’ in Chicago connecting group admins with Facebook staff (including Zuckerberg himself). However, there is a rigorous application process and, for now at least, it’s only open to US admins. We watch with interest to see if the conversation is about ‘getting more out of Facebook’, or listening to community management needs with an intent to act.

Imagine where we might be if this decade-old business had engaged community experts and others outside its filter bubble to begin with. Imagine what a commitment to transparency could accomplish on all sides.


Swarm Conference, Australia’s only conference for community professionals, is on in Sydney from 30th-31st August. Tickets are on sale now at www.swarmconference.com.au

Originally published on B&T.
