The Challenges of Moderating Online Communities during the 2023 Voice Referendum in Australia
One year on from the 2023 Voice to Parliament referendum, we wanted to reflect on our learnings and experiences from moderating social media channels that were directly or indirectly involved in sharing information about the referendum.
The aim of the Voice to Parliament referendum was to constitutionally recognise Aboriginal and Torres Strait Islander peoples by establishing a formal advisory body. What began as an important national conversation about justice, representation and reconciliation quickly descended online into discussions that promoted divisiveness and further harmed Aboriginal and Torres Strait Islander communities.
Increased misinformation, disinformation, hate speech and political polarisation made it incredibly challenging to maintain safe online spaces and enable positive discussion of the critical opportunity at hand. While we don’t want to reopen these wounds, there is an opportunity – as social media managers, moderators and digital citizens – to take these learnings and improve how we approach culturally or politically sensitive topics and events.
1. The Volume of Misinformation and Disinformation
Quiip moderated multiple communities across a range of channels for our partners in the lead-up to the referendum and in the weeks following. Any posts related to the Voice referendum were inundated with comments. Unfortunately, a significant portion of this content was inaccurate or false.
Misinformation – misleading or incorrect information spread unintentionally – was rampant. It often arose from confusion over the referendum’s objectives or the legal implications of the proposed changes to the Constitution. For example, some posts falsely claimed that the Voice would give First Nations people veto power over parliamentary decisions or that it would drastically reshape land ownership laws. Spreaders of misinformation often grouped together in comment threads, lending ‘social proof’ to each other’s claims.
On the other hand, disinformation – false information spread deliberately to deceive – was weaponised to exploit societal divisions. Some disinformation campaigns seemed orchestrated to stoke fear and create division by playing on cultural anxieties. Whether spread by fringe groups or by foreign or local actors, this content often sowed confusion, undermined trust in Australian institutions, and distorted the true nature of the Voice.
Memes, doctored videos and misleading statistics circulated widely, requiring constant fact-checking and flagging by moderators. The sheer volume of this content made it extremely challenging for moderation teams to keep up in real time, and comments were disabled on multiple posts.
Key learning: Despite one of the largest grassroots volunteer movements Australia has ever seen – with over 60,000 people volunteering their time to support the Yes campaign – misinformation and disinformation quickly reached a tipping point that was almost impossible to address. Social media platforms have enabled misinformation to flourish, and we need to change how we tackle it. Long-held beliefs around tackling myths head-on need to be revisited, and organisations need a plan for how they will address misinformation.
Quiip recommends an escalating approach (sketched in code after this list):
- Remove isolated incidents of misinformation early and often.
- Identify bots and bad actors, and create moderation processes for dealing with them.
- Directly and publicly address common misinformation themes, calmly and with facts.
- Provide a toolkit for supporters to address key misinformation themes with friends and family.
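To make that escalation concrete for teams who encode their workflows, here is a minimal sketch in Python. Every name and threshold in it is a hypothetical illustration for this post, not Quiip tooling or any platform’s API.

```python
from dataclasses import dataclass

@dataclass
class AuthorHistory:
    removals: int = 0             # comments previously removed from this author
    flagged_as_bot: bool = False  # identified as a bot or coordinated bad actor

@dataclass
class ThemeStats:
    sightings: int = 0            # how often this misinformation theme has appeared

PUBLIC_RESPONSE_THRESHOLD = 25    # hypothetical point at which a theme is "common"

def escalation_action(author: AuthorHistory, theme: ThemeStats) -> str:
    """Recommend a moderation action for a single misinformation comment."""
    if author.flagged_as_bot:
        return "ban-and-report"               # bad actors get their own process
    if theme.sightings >= PUBLIC_RESPONSE_THRESHOLD:
        return "remove-and-respond-publicly"  # address the theme calmly, with facts
    if author.removals >= 3:
        return "mute-and-review"              # repeat spreader: escalate to humans
    return "remove"                           # isolated incident: remove early and often
```

The final step, a supporter toolkit, lives outside any codebase; most of this strategy is organisational rather than technical.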
2. Bigotry and Hate Speech in Comment Threads
Another major challenge was the spread of bigotry and hate speech in discussions surrounding the referendum. Comment threads, especially on platforms like Facebook, Instagram and X (formerly Twitter), saw an influx of inflammatory language. Much of this was directed toward Indigenous Australians, with racist stereotypes and slurs becoming alarmingly common. Moderators found themselves overwhelmed by the sheer volume of derogatory comments, many of which were flagged by users but could not always be acted upon quickly enough.
The referendum debate drew out the ugly undercurrent of racism that still exists in Australian society, with some individuals using online anonymity to spread toxic rhetoric. Moderating these spaces required not only the removal of blatantly harmful content, but also a more nuanced approach to addressing the underlying biases that allowed such comments to proliferate. For example, statements that appeared neutral or “rational” on the surface often veiled deeply racist assumptions or used coded language to belittle First Nations people and their rights. These were harder to detect and remove without an in-depth understanding of context and culture. Moderation teams regularly discovered new tactics users were employing to evade moderation, and had to update their risk management frameworks accordingly.
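For the narrower problem of keyword evasion (deliberate misspellings and spaced-out letters, as distinct from genuinely coded language), a normalisation layer in front of matching is a common pattern. The Python sketch below is a minimal, hypothetical illustration: the substitution map and the placeholder blocklist are invented for this post, and no real terms are shown.

```python
import re

# Hypothetical substitutions users make to slip past keyword filters.
OBFUSCATION_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                                 "5": "s", "@": "a", "$": "s"})

BLOCKLIST = {"exampleslur"}  # placeholder only; real lists are maintained privately

def normalise(text: str) -> str:
    """Collapse common obfuscation tricks before matching."""
    text = text.lower().translate(OBFUSCATION_MAP)
    return re.sub(r"[^a-z]", "", text)  # defeats "s l u r" and "s.l.u.r" spacing tricks

def matches_blocklist(comment: str) -> bool:
    flat = normalise(comment)
    return any(term in flat for term in BLOCKLIST)
```

Collapsing a whole comment this way will produce false positives, which is one more reason borderline matches should route to a human rather than being auto-removed. It also does nothing for genuinely coded language, which still requires cultural and contextual knowledge to recognise.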
Key learning: For a long time, as moderators, we have quietly removed racism and hate speech, leaning heavily on hiding or deleting comments. Moderating the Voice referendum highlighted that a more proactive approach is necessary to maintain online spaces that are productive and psychologically safe. This would require organisations to embrace an actively anti-racist approach, calling out bad behaviour and reinforcing that it isn’t welcome in moderated spaces.
3. Polarisation and the Impact on Moderation
The political polarisation surrounding the Voice referendum made moderating online discussions even more challenging. Both supporters and opponents of the Voice often accused each other of bad faith arguments, leading to heated exchanges that moderators struggled to contain. Well-intentioned discussions quickly devolved into personal attacks as users became increasingly entrenched in their views.
Moderators were caught in a difficult position: we needed to foster free expression and healthy debate while ensuring that conversations remained respectful and free from harassment. Striking this balance proved particularly hard given the emotionally charged nature of the referendum. Many users claimed censorship when their posts were removed, or swarmed unrelated posts to demand that closed comment threads be reopened, further complicating efforts to maintain healthy discourse.
Key learning: Better-documented processes and guidelines for tackling contentious debate – for example, what is and isn’t allowed, and how moderators can firmly communicate that – would have provided a solid foundation for regulating heated arguments.
4. Insufficient Resources for Moderators
A common issue throughout the referendum period was that moderation teams were often underserved by the tools that platforms provide. The sheer scale of content related to the Voice referendum required significant human resources, especially during peak times leading up to the vote. However, many platforms relied heavily on automated systems for moderation, which often failed to catch nuanced forms of hate speech, sarcasm or disinformation. Human moderators, meanwhile, were overburdened, leading to slower-than-usual response times and missed opportunities to address problematic content before it spread widely.
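One mitigation for this imbalance is human-in-the-loop triage, where automation acts on its own only at high confidence and everything ambiguous is routed to an adequately staffed human queue. The sketch below is purely illustrative; the names and thresholds are hypothetical, not a description of any platform’s actual system.

```python
# Hypothetical human-in-the-loop triage: automation acts alone only when confident.

AUTO_REMOVE_THRESHOLD = 0.95   # illustrative thresholds, tuned per community
AUTO_APPROVE_THRESHOLD = 0.05

def triage(comment_id: str, harm_score: float, human_queue: list[str]) -> str:
    """harm_score: an automated classifier's confidence that the comment is harmful."""
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-removed"
    if harm_score <= AUTO_APPROVE_THRESHOLD:
        return "auto-approved"
    # Sarcasm, coded language and borderline disinformation land in this band.
    human_queue.append(comment_id)
    return "queued-for-human-review"
```

The width of that middle band is effectively a staffing decision: the more human hours a team can resource, the more nuance the system can afford to defer to people.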
The emotional toll on human moderators was also significant. Constant exposure to hateful content, racist rhetoric and disinformation can lead to burnout and even trauma, particularly for Indigenous moderators who had to confront attacks on their identity and culture in real time. Many clients took their cues from Indigenous moderators but, to limit their heightened exposure to negativity and hate speech, did not require these team members to work through the referendum period.
Key learning: Moderation around the Voice referendum was emotional labour that was both mentally taxing and time-intensive. Having allies moderate comments, and providing wellbeing safeguards for those moderators, is critical. Safeguards can include shorter moderation shifts with scheduled breaks; debriefing and access to an Employee Assistance Program; encouraging self-care and positive activities after moderation work; and adequately resourcing moderation teams in terms of staff and hours.
5. The Role of Platforms and Future Solutions
The 2023 Voice referendum underscored the need for more robust solutions to moderate online communities effectively. Social media platforms must take greater responsibility in curbing the spread of harmful content by investing in better tools and training for both AI-driven moderation and human teams. Collaboration with Indigenous groups, cultural experts, and fact-checking organisations could also enhance moderation efforts, ensuring that misleading or harmful content is caught earlier and handled with greater sensitivity.
Looking forward, platforms should consider adopting clearer policies on disinformation and hate speech that specifically address the needs of marginalised groups during political debates. They also need to support public education campaigns that promote media literacy and help users distinguish between factual information and deceptive content.
Communities should adopt an anti-racist stance, not just a non-racist one (to borrow a phrase from Luke Pearson). It’s often not enough to simply hide misinformation or disinformation – it needs to be addressed and exposed for what it is.
Key learning: Social media platforms need greater accountability around stopping the spread of misinformation and hate speech. Currently, not enough is being done by the large tech companies, and much of the onus is put on organisations operating in those spaces. Unfortunately, regulation takes time and, despite good intentions, is often riddled with loopholes. In the interim, Quiip will continue to advocate for greater trust and safety mechanisms on social media platforms.
Conclusion
The 2023 Voice referendum revealed the darker side of public discourse in the age of social media. It brought to the forefront that Australia has a long way to go in reconciling our colonial past with a future of respect, justice and truth-telling that benefits all Australians.
Misinformation, disinformation and bigotry flooded online platforms, making moderation a daunting and often thankless task. As Australia – and the world – continues to grapple with an increasingly divisive political environment, the lessons from the Voice referendum highlight the urgent need for stronger, more proactive measures to foster healthier online communities.
Moderating these spaces effectively requires not just better technology, but also a deeper commitment to empathy, education, and understanding in our shared digital spaces.
At Quiip, we are heartened by the six million Australians who voted “Yes” and will continue to focus our efforts on connecting, protecting and supporting communities online. Get in touch to find out how Quiip can protect your community.