Taking Action Against Racism in Online Communities

by Lauren Piro May 31, 2023

As Australia approaches its first referendum in over 20 years, registered voters will be asked to decide on a constitutionally recognised First Nations Voice to Parliament. Australians are increasingly turning to social media as a source of news and information, and with this comes an increase in public discussion, not all of it civil. In fact, the racism and hate speech have already begun, emboldened by the relaxed scrutiny of some social media platforms and the heated climate of debate. 

While content will be largely driven by news media and government institutions, it will almost certainly filter down to other groups and pages. If your brand intends to make a statement of support or publish related content, there is a responsibility to manage the comments that ensue. However, even if you don’t plan to address the Voice Referendum directly, it helps to be prepared for errant threads that may arise.

In social media and online communities, inviting engagement means strategically navigating the boundary between encouraging open discussion and enforcing behavioural rules. In their guidelines, most pages make it clear that prejudiced and offensive comments are not welcome. But beyond removing obvious hate speech, there is the deeper, values-based element of cultural moderation to consider.

Venessa Paech explains, “There are two main types of moderation your online community manager should be thinking about: regulatory, and cultural. Regulatory moderation means the actions that help you stay compliant with laws, regulations and official guidance from government and other institutions (such as rules around hate speech, defamation and cyberbullying). Cultural moderation covers content and behaviour not necessarily regulated by law, but deeply relevant to your unique community or group and its social norms.” 

If your organisation values diversity and inclusion, there is more at stake than just legal compliance.

It’s often said that the standard we walk past is the standard we accept, and social media is no exception. Where racist comments flourish unchecked, it becomes clear to marginalised people that the space is not psychologically safe, which actively limits who will participate.

Automated tools like keyword filters can be helpful for catching slurs, but they may not be sophisticated enough to pick up microaggressions and other implied or coded racism. They also tend to hide comments about racism that actually contribute positively to a discussion. Human moderation, particularly from professional moderators who are trained and briefed on what to look for, provides a vital layer of community protection. 
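To make that limitation concrete, here is a minimal, hypothetical sketch of a keyword filter in Python. The blocklist terms and example comments are invented placeholders, not anything Quiip or any platform actually uses; the point is simply that keyword matching flags a constructive comment while letting a microaggression sail straight through.

```python
import re

# Placeholder terms standing in for a real slur list (invented for illustration only)
BLOCKLIST = {"offensive term a", "offensive term b"}

def naive_filter(comment: str) -> bool:
    """Return True if the comment would be hidden, based purely on keyword matches."""
    text = comment.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", text) for term in BLOCKLIST)

comments = [
    # Constructive comment that names the problem: flagged anyway
    "Calling this out: using offensive term a against anyone is never okay.",
    # Microaggression containing no blocklisted words: not flagged at all
    "People like that don't really belong here, do they?",
]

for c in comments:
    print(naive_filter(c), "-", c)
```

The filter has no sense of intent or context, which is exactly why pairing automated tools with trained human moderators matters.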

Quiip has experience working within high-risk spaces, including campaigns that invite discussion on sensitive topics like racism. We partnered with Network 10 to provide Facebook moderation during the airing of The Final Quarter, the documentary about Adam Goodes. In 2016, we supported Beyond Blue with moderation on posts for the Invisible Discriminator campaign, which highlighted the emotional impacts of subtle or casual racism on First Nations Peoples. 

Currently, Quiip provides after-hours moderation for SBS, including NITV and other branded channels. Dan De Sousa, who leads Quiip’s SBS moderation team, shares his insights into working with sensitive and high-risk accounts:

“The SBS and NITV Facebook pages attract comments from a wide audience, both positive and negative. One of the biggest challenges for these pages is keeping the comment sections welcoming and safe for everyone, particularly the primary audience for NITV. 

Our focus on these pages is highly attentive, requiring quick moderation of any conversations that cross the line into racist commentary. We use the in-built tools that Meta Business Suite provides, with the option to ban or block people from commenting any further. Our team will allow conversations that debate a topic respectfully, but if it veers into racism, misinformation or trolling, it’s on us to clear those comments from the thread. These actions ensure the audience still have a place they can discuss issues relevant to them without the emotional burden that racism can carry for the targets of those comments. 

Often we may see posts, some years old, become subject to a wave of comments that all cross that line, so being able to swiftly discern which posts need our focus is a valuable skill. We may not always be able to pinpoint why a particular post has suddenly attracted this attention, but we’re the front line making sure those comments are not able to derail SBS’ intent in providing information for the community.”
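Dan’s team works through the Meta Business Suite interface, but the same hide and ban actions are also exposed programmatically. Purely as a rough sketch, and not a description of Quiip’s or SBS’s actual tooling, hiding a comment and blocking a commenter via the Facebook Graph API could look something like the following; the API version, token handling and parameter names are assumptions to be checked against Meta’s current documentation.

```python
import os
import requests

GRAPH = "https://graph.facebook.com/v19.0"  # assumed API version
PAGE_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]  # Page token with moderation permissions (assumed setup)

def hide_comment(comment_id: str) -> bool:
    """Hide a single comment on a Page post so it is no longer publicly visible."""
    resp = requests.post(
        f"{GRAPH}/{comment_id}",
        data={"is_hidden": "true", "access_token": PAGE_TOKEN},
        timeout=10,
    )
    return resp.ok

def block_user(page_id: str, user_id: str) -> bool:
    """Block a user from commenting on the Page in future."""
    resp = requests.post(
        f"{GRAPH}/{page_id}/blocked",
        data={"user": user_id, "access_token": PAGE_TOKEN},
        timeout=10,
    )
    return resp.ok
```

Automation like this only removes the mechanical step; deciding which comments cross the line still calls for the human judgement Dan describes.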

Key takeaways:

  • Resource properly: provide around-the-clock coverage with skilled community/social media managers, or limit posting when community management is unavailable.
  • Moderate proactively and reactively.
  • Utilise platform tools, whilst being aware that automated filters are not sophisticated enough to catch everything.
  • Ensure you have robust governance frameworks, including moderation and community guidelines, response protocols and escalation frameworks.
  • Adhere to the Australian Community Managers’ Code of Ethics.

If you are interested in moderation support for campaigns related to the Voice Referendum, or any other potentially sensitive event, head to the Contact page and let’s chat!