The role of social media in spreading fake news

by Digital Admin December 01, 2017

Fake news is a term we have heard a lot in the past two years, especially as Donald Trump campaigned, rose to power and took office in the U.S. The term has since spread internationally and has been cited as an influence in elections the world over, from the UK to Germany. Collins Dictionary recently crowned the phrase its Word of the Year, defining fake news as follows:

fake news, noun: false, often sensational, information disseminated under the guise of news reporting

Prior to social media, we understood fake news as propaganda. While propaganda has been used throughout history to alter public perception and skew outcomes, the rise of the internet, social media, and a lack of governing control has given a new kind of power to this technological propaganda.

To understand why fake news is so powerful it’s important we look at the different types out there. The Telegraph describes five categories:

  • Commercially-driven sensational content: Sensationalist content with little to no evidence. Clickbait used to drive traffic to sites to generate advertising income. These articles could have titles such as “Trump actually related to Osama Bin Laden” or “Secret North Korean missile sites in the USA discovered”.  BBC reported that an entire village was getting rich in this fashion.
  • Nation state-sponsored misinformation: This is more traditional propaganda by the state or nation. Outlets in Russia, China, and other states are often accused of producing content to sway public opinion or sow division.
  • Highly-partisan news sites: These sites are often volatile and open about their extreme views. They are usually marketed as ‘anti-mainstream’ and independent of major political parties and big business. Examples include Breitbart News and outlets associated with Milo Yiannopoulos.
  • Social media itself: Twitter bots posting content, Facebook ads, and YouTube content. These are not links outside of social media but are part of the social networks themselves.
  • Satire or parody: Light-hearted publications such as The Onion, Betoota Advocate, and Daily Mash.

While sites such as The Onion and Betoota Advocate are clearly satire, this wasn’t always clear to users: Facebook tested a “satire” tag in 2014, following backlash and user confusion on the platform. While the change doesn’t appear to have been implemented long-term, it does show the unconscious trusting nature of users and the effect this can have at scale.

In the wake of the 2016 U.S. presidential election, Facebook CEO Mark Zuckerberg initially denied that his advertising platform had been complicit in spreading fake news and that Russian agents had bought advertisements in an attempt to sway the election outcome. In time, he released statements admitting the platform had been used in exactly this way.

Mashable reported that in the final three months of the U.S. election campaign, fake news was shared and engaged with more heavily than news from outlets such as the BBC, The New York Times, The Washington Post, and NBC News. The analysis found that the 20 top-performing false election stories from hoax sites and hyperpartisan blogs generated 8,711,000 shares, reactions, and comments on Facebook, while in the same period the 20 best-performing election stories from 19 major news websites generated a total of 7,367,000.

Oxford Internet Institute scholars Philip Howard, Bence Kollanyi and Samuel Woolley call fake news “computational propaganda” and have cited it in their research as a factor in the 2016 U.S. election result. They explain that, through the mobilisation of bots, pro-Trump campaigners and programmers carefully adjusted the timing of content produced during the debates, strategically colonised pro-Clinton hashtags, and then disabled activities after Election Day. Their research also shows the timing of this bot mobilisation aligning with shifting poll results, especially in swing states, in the lead-up to polling day. NPR discusses this effect in further detail, claiming it was largely orchestrated by Russian agents, and similar effects were seen in the UK around #Brexit.

Interestingly, their research into a recent German election, won by Angela Merkel’s centre-right party, showed that only 20% of news stories shared in the lead-up to the vote were deemed junk (fake news). They attribute this lower level of fake news both to higher education levels in Germany and to the public financing of several types of professional news organisations, which dominated the conversation.

While reach and influence do not directly correlate, and influence is difficult to measure, the impact of exposure to misinformation has been well documented.

Facebook and Twitter don’t generate junk news but they do serve it up to us. We must hold social media companies responsible for serving misinformation to voters, but also help them do better. They are the mandatory point of passage for this junk, which means they could also be the choke point for it. – Philip Howard and Bence Kollanyi (2017).

As for the allegations of Russia, China, and other global powers using their might to sway elections and plant fake news stories? A recent report by Freedom House states that over 30 world governments are mobilising keyboard armies to attack government dissidents and sway public opinion. The pressure is mounting on social media networks to provide clearer data on government involvement in elections and the proliferation of fake news. The UK recently ordered Facebook to hand over information pertaining to Russia’s involvement in its elections.

Facebook has been rolling out incremental changes, including partnering with the Associated Press, Snopes, ABC News, and FactCheck.org to check link accuracy, adding satire and disputed tags to links, and, most recently, a “More Info” button. However, many critics argue these changes are not enough.

Journalists working closely on these changes within Facebook recently spoke to The Guardian, stating that the changes were “way too little, way too late”, that the war against fake news was failing, and that their labour had been exploited for a Facebook PR campaign.

Interestingly, Facebook has also just announced a tool that will allow users to see whether they have come into contact with content created by the Internet Research Agency, a Kremlin-linked ‘research agency’ commonly viewed as a Russian propaganda vehicle. The tool won’t be available until the end of the year or early next year, and since it is focused on a single content creator, many feel it doesn’t go far enough.

Twitter has made changes to replies and how they’re viewed in an effort to silence bots on the platform, as reported by Mashable. These seem to have been effective; however, Twitter could still be home to up to 48 million bots, according to CNBC. In response, a bot has even been created to track other bots on the network.

While users and critics are unhappy with the slow progress of the major networks, it’s not surprising that these corporate entities are reluctant to invest heavily in the exercise, especially if the changes could lower engagement on their platforms or require altering their feed algorithms to reduce filter-bubble and echo-chamber effects. With this in mind, it doesn’t appear the flood of fake news will be stemmed anytime soon.