Introduction by Croakey: Disturbing scenes from last weekend’s “Kill the Bill” rally in Melbourne have highlighted the role of social media in inciting violence and undermining government efforts to combat the COVID-19 pandemic.
Croakey has previously reported extensively on how digital platforms are spreading COVID-19 misinformation and disinformation (see here and here for recent examples).
As a media organisation, we have taken action on this issue by participating in relevant inquiries and other consultation processes, and by encouraging public health advocates and groups to push for policies that promote public interest journalism and address misinformation and disinformation harmful to health.
We also believe it is important that individual social media users take action to identify and report material they encounter in their social media feeds that they believe is false, misleading and potentially harmful to health.
Below, Croakey Editor Jennifer Doggett reports on the approaches being taken by digital platforms to address misinformation and disinformation about COVID-19.
And she provides a detailed, practical guide for how users can report false and misleading material.
Jennifer Doggett writes:
Peaceful protests are a right in democratic societies but last weekend’s rally in Melbourne was a far cry from other organised and orderly protests held in the city, such as the March 4 Justice earlier this year.
The rally reportedly included an “apparently operational gallows” and crowds chanting “slogans such as ‘Kill Dan Andrews’ and ‘Hang Dan Andrews’”.
The event was extensively promoted on social media, which, according to media reports, is now “awash” with images and commentary from the rally.
One example is the tweet below, which was liked by Federal MP Craig Kelly, who also attended the rally and addressed the crowds.
This is reminiscent of social media’s role in the storming of the US Capitol in January 2021, where protestors sought to overturn the result of the democratic election won by Joe Biden.
Digital platforms have made some efforts to address false and misleading information. For example, this week Google reported that it had removed three-quarters of the videos posted by the United Australia Party.
But as the protests in Melbourne demonstrate, extremists and conspiracy theorists continue to use social media to spread false and misleading information and to incite violence.
Policies
All major social media platforms have policies on false and misleading information; understanding these policies can help users know what type of material is likely to be removed or restricted when reported.
Platforms have applied these policies to varying degrees to reduce false and misleading content, including banning and de-platforming some high-profile spreaders of misinformation and disinformation.
So far in 2021, Twitter has temporarily or permanently banned the accounts of a number of people spreading US election conspiracy theories and QAnon content. The most famous case is former US President Donald Trump, but others include Lin Wood, Sidney Powell and Marjorie Taylor Greene.
Instagram and Facebook have banned a number of far-right political extremists and extremist organisations such as Alex Jones, Milo Yiannopoulos and Infowars. They also banned the Nation of Islam leader, Louis Farrakhan, for making anti-Semitic statements.
However, it is important to note that, despite these policies, misinformation and disinformation continue to circulate on social media platforms, and that public health and journalism advocates have repeatedly criticised the platforms for failing to prevent the spread of COVID conspiracy theories and anti-vaccine propaganda.
For example, the Bureau of Investigative Journalism has reported that false information on COVID is still circulating widely on social media platforms in India and undermining the effectiveness of public health responses to the pandemic.
Closer to home, the anti-vaccination Tweet below was posted yesterday by Craig Kelly MP.
Facebook and Instagram
Facebook and Instagram (both owned by Meta) say they have a three-part strategy to stop misinformation:
- Removing accounts and content that violate our Community Standards or ad policies
- Reducing the distribution of false news and inauthentic content like clickbait
- Informing people by giving them more context on the posts they see.
Meta has identified the following steps in its processes to reduce the spread of misinformation:
- Identifying false news – including reports from users of Facebook and Instagram.
- Reviewing content – fact-checkers review content, check its facts and rate its accuracy. This happens independently and may include calling sources, consulting public data, authenticating videos and images, and more.
- Clearly labelling misinformation and informing users about it – including applying a label to content that has been reviewed by fact-checkers.
- Ensuring that fewer people see misinformation – on Facebook this involves making it appear lower in News Feed; on Instagram it can be filtered out of Explore and featured less prominently in feed and Stories.
- Taking action against repeat offenders – including restricting or removing pages and websites that repeatedly share misinformation.
Facebook has also stated that it refuses to run ads on pages that promote misinformation and is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts. When these organisations rate something as false, Facebook ranks those stories significantly lower in News Feed, which it states cuts future views by more than 80 percent.
Facebook argues that this approach targets people who frequently spread fake stories and “dramatically decreases” the reach of those stories without stifling public discourse.
Twitter
Twitter has defined the following categories of material as ones that will prompt it to take action:
- Misleading information — statements or assertions that have been confirmed to be false or misleading by subject-matter experts, such as public health authorities.
- Disputed claims — statements or assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown.
- Unverified claims — information (which could be true or false) that is unconfirmed at the time it is shared.
More information about how Twitter defines these categories in relation to COVID-19 can be found here.
In response to the pandemic, Twitter broadened its definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information, including requiring people to remove tweets that include:
- Denial of global or local health authority recommendations
- Description of alleged cures for COVID-19
- Description of harmful treatments or protection measures which are known to be ineffective
- Denial of established scientific facts about COVID-19 transmission
- Specific claims around COVID-19 information that intends to manipulate people into certain behaviours for the gain of a third party
- Specific and unverified claims that incite people to action and cause widespread panic, social unrest or large-scale disorder
- Specific and unverified claims made by people impersonating a government or health official or organisation
- Propagating false or misleading information around COVID-19 diagnostic criteria or procedures
- False or misleading claims on how to differentiate between COVID-19 and a different disease
- Claims that specific groups or nationalities are never susceptible, or are more susceptible, to COVID-19.
Twitter also says it prioritises removing content with a clear call to action that could directly pose a risk to people’s health or well-being, and has applied public interest notices in cases where world leaders violate its COVID-19 guidelines.
YouTube
YouTube has community guidelines, which include a general policy on misinformation covering content “that can cause real-world harm, like promoting harmful remedies or treatments, certain types of technically manipulated content, or content interfering with democratic processes”.
YouTube has also developed specific policies on COVID-19 content that “poses a serious risk of egregious harm.” This is defined as “content that spreads medical misinformation that contradicts local health authorities’ or the World Health Organization’s (WHO) medical information about COVID-19” on:
- Treatment
- Prevention
- Diagnosis
- Transmission
- Social distancing and self-isolation guidelines
- The existence of COVID-19.
TikTok
TikTok is the fastest-growing social media app and is particularly popular among teenagers and young people. Its approach to misinformation is included in its community guidelines, which state that the platform does “not permit misinformation that causes harm to individuals, our community, or the larger public regardless of intent”.
These guidelines state that users cannot “post, upload, stream, or share” the following:
- Misinformation that incites hate or prejudice
- Misinformation related to emergencies that induces panic
- Medical misinformation that can cause harm to an individual’s physical health
- Content that misleads community members about elections or other civic processes
- Conspiratorial content that attacks a specific protected group or includes a violent call to action, or denies a violent or tragic event occurred
- Digital Forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events and cause harm to the subject of the video, other persons, or society.
What happens when someone reports false and misleading information
Digital platforms do not provide detailed information about how they respond to reports of false and misleading information. This is partly because they do not want to reveal information that spreaders of misinformation and disinformation could use to evade the methods used to identify and remove or restrict their posts.
Facebook and Instagram say they use a combination of automated searches, using algorithms designed to identify problematic posts, and independent fact-checkers who review content manually. YouTube says that reports are manually checked by reviewers to determine whether they violate the platform’s policies.
Reporting is anonymous (except in some cases involving reports of copyright infringement, where the platform may need to verify ownership of the material).
How to report false and misleading information
The following guides provide step-by-step instructions for reporting misinformation and disinformation on social media platforms as well as taking action to block spam and unsolicited text messages and phone calls.
Facebook
- Open the post
- Click on the three dots in the top right corner
- Click on “Find support or report post”
- Click on “False Information”
- Click on “Health”
- Click on “Submit”
Instagram
- Click on the post
- Click on the three dots in the top right corner
- Click on “Report”
- Click on “False information”
- Click on “Health”
- Click on “Submit report”
Twitter
- Click on the Tweet
- Click on the three dots in the top right corner
- Click on “Report Tweet”
- Click on “It’s misleading”
- Click on “Health”
- Click on “COVID-19 information”
TikTok
- Go to the video
- Press and hold on the video.
- Select “Report”
- Select the relevant category (there is no specific category for health information)
- Select “Submit”
Reports can also be lodged via TikTok’s online feedback form: https://www.tiktok.com/legal/report/feedback
YouTube
It is possible to report YouTube videos, channels, comments, links or ads. The following instructions relate to reporting a specific video, but the processes for reporting other YouTube content are similar:
- Sign in to YouTube
- Open the video you wish to report
- Click on the three dots below the bottom right corner of the video frame
- Select the reason that best fits the violation in the video (note that there is no specific category for health or COVID-19 misinformation)
Unsolicited text messages
Another way misinformation and disinformation about COVID is circulating is through unsolicited text messages.
Throughout the pandemic, many Australians have received text messages from Clive Palmer and Craig Kelly of the United Australia Party (UAP), and some political commentators have reported that this activity is likely to increase in the lead-up to the next federal election.
There are two major ways people can block unsolicited text messages, such as those from the UAP. The first is to block a specific number which prevents calls and texts from that number from being received by the phone.
To do this on an iPhone:
- Open the Messages app
- Open a message from the caller you wish to block
- Tap on the icon above the name of the account
- Select the “Info” button in the top right corner to open a new screen
- Select the “Info” button on this new screen
- Select “Block this Caller” from the pop-up menu.
On an Android phone this can be done by opening the Messages app and touching and holding a message from the unsolicited sender. This should bring up a menu with a ‘block’ option, which can be selected to stop that sender from sending messages in the future.
The second method is to set up the phone to block all messages from senders who are not listed as contacts in the phone’s address book. On an iPhone, this is done by going to Settings, then Messages, and turning on “Filter Unknown Senders”.
See Croakey’s archive of stories on digital platforms and public health.