Introduction by Croakey: The case for increased regulation of digital media platforms has been strengthened by a number of recent reports highlighting their role in undermining democratic processes.
These include revelations that Facebook repeatedly ignored internal advice that political leaders in 25 countries around the world were using the platform to deceive voters and harass opponents.
Closer to home are reports that the Australian Electoral Commission has asked Facebook to provide information about the multiple pages and profiles set up by Queensland MP Andrew Laming which masqueraded as news and community groups while providing politicised information supporting the Liberal National Party.
It’s also worth noting that social media is not the only form of media spreading dangerous misinformation and disinformation to achieve politically motivated outcomes.
In evidence given this week to the Senate Environment and Communications References Committee as part of its inquiry into the state of media diversity, independence and reliability, a world-leading climate scientist outlined the role of the mainstream Murdoch media empire in spreading misinformation on climate change.
Speaking in a private capacity, Professor Michael E Mann, Director of the Earth System Science Center at Pennsylvania State University, told the committee that the Murdoch press has served as a “megaphone” for climate disinformation and had “worked extremely hard to…attack the facts and to undermine public faith in factual discourse”.
These revelations are particularly concerning given the findings of a recent nationally representative survey that most Australian adults had a low level of confidence in their media literacy abilities, including in areas such as checking if information online is true and checking if a website can be trusted.
The health implications of a poorly regulated social media sector are particularly important to consider in the context of the global COVID-19 pandemic.
While Australia has generally high rates of vaccine acceptance, there are growing concerns among some doctors and health experts that COVID-19 misinformation and disinformation could affect the successful roll-out of the vaccine program, in particular among vulnerable groups.
The Royal Australian College of General Practitioners recently reported concerns being raised by GPs about this issue, including GP Dr Tanya Schramm, a Palawa woman and Chair of the Expert Committee behind the COVID-19 clinical recommendations for Aboriginal and Torres Strait Islander people. Dr Schramm described COVID-19 misinformation as “rampant” and said that social media has played a central role in its dissemination.
These concerns are borne out by research from Reset Australia, a policy and advocacy organisation contesting the digital threats to democracy, which has found that vaccine hesitancy among Australians is growing.
In the post below, Matthew Nguyen, Policy Lead for Reset Australia, draws on his background in public health to analyse the role of social media platforms in promoting misinformation and disinformation on COVID-19 and suggests some practical policy and regulatory measures to combat their negative health impacts.
Matthew Nguyen writes:
There are corners of the internet raising the alarm that Federal Health Minister Greg Hunt wasn’t in hospital with cellulitis.
Premier Dan Andrews didn’t hurt his back by slipping on wet stairs either.
Actually, these two politicians are among the Australians who have been struck down by devastating side effects from the COVID-19 vaccine, not that anyone in the government will admit this.
Of course, none of this is true, and in terms of tackling a pandemic, rebutting these spurious claims might seem low on the list of priorities.
Vaccine hesitancy growing
But Australia’s vaccine hesitancy rates are growing – between October 2020 and February this year, the percentage of Australians willing to get the COVID-19 jab fell by eight percentage points, from 74% to 66%.
This has coincided with a surge of conspiracy theories and rumours in Australian Facebook groups as the government launched its COVID-19 vaccine roll-out.
In response to the surge in anti-vaccination activity, the federal government has launched a team of 30 myth-busters, who will take on celebrities and influencers peddling false claims about COVID-19.
This is well-intentioned but ultimately it will do little to turn the tide of misinformation.
What is actually needed is transparency about the extent of the misinformation problem, because our current response to rampant falsehoods resembles blindfolded whack-a-mole.
Too often we mistake misinformation for a handful of prominent voices promoting hydroxychloroquine or ivermectin, or pushing back against public health measures.
That is not where the danger of misinformation lies.
The real dangers
The real danger of rampant vaccine hesitancy and scientific scepticism is tucked away in the algorithm-created bubbles of Facebook, YouTube and Twitter, where ideas fester and spread, unseen and unchecked by mainstream conversation.
The more you engage with the conspiratorial, the more the algorithms will send you down the rabbit hole and connect you with others who share your world view.
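To make this dynamic concrete, the toy sketch below shows how engagement-based ranking can keep serving a user more of whatever they already click on. Every number and topic label is invented for illustration; it is not how any platform’s ranking system is actually built.

```python
# Toy model of the engagement feedback loop described above.
# All numbers and topic labels are invented for illustration;
# this is not any platform's actual ranking code.

def rank_feed(posts, affinity):
    """Order posts so the topics the user has engaged with most come first."""
    return sorted(posts, key=lambda topic: affinity.get(topic, 0.0), reverse=True)

posts = ["conspiracy", "mainstream news"]
# After a single click on a conspiratorial post, that topic edges ahead.
affinity = {"conspiracy": 0.6, "mainstream news": 0.5}

for day in range(1, 6):
    top = rank_feed(posts, affinity)[0]
    affinity[top] += 0.2  # engaging with the top post deepens the measured preference...
    print(f"day {day}: top of feed = {top}, affinity = {affinity[top]:.1f}")
    # ...so the same kind of content wins the top slot again tomorrow.
```

Run over a few simulated days, the small initial preference compounds and the same kind of content dominates the feed, which is the rabbit-hole effect in miniature.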
In fact, studies now suggest that once someone is lodged in this cycle of suspicion, fact-checking misinformation will only reinforce their views.
This is a real challenge for the government’s myth-busters, and for anyone else trying to tackle COVID-19 misinformation. Not only are these people difficult to find and reach, but when you do reach them they are likely to be highly resistant to factual information.
Big Tech didn’t set out to create these problems. But so far its efforts, such as labelling posts as false and deplatforming prominent spreaders, have failed to tackle the root of the misinformation problem.
It is the business model itself, which relies on algorithms to keep us online and engaged, that fuels misinformation.
To date Big Tech has been completely unwilling to acknowledge the role algorithms play, or to turn down the engagement metrics that funnel us into echo chambers.
A “limp toothless” code
Take the recent industry-written code to tackle misinformation and disinformation that DIGI – the peak body representing Google, Facebook, Twitter, Microsoft, Redbubble, and TikTok in Australia – released in February.
This limp, toothless, opt-in code ignores the systemic way algorithm-curated online spaces turbocharge misinformation, instead putting the onus on consumers to make “better informed choices”.
DIGI’s code was in development for over a year, yet it contains no real governance or enforcement measures.
If this kind of regulatory regime were offered by any other industry, it would be laughed out of town.
For too long, Big Tech has got away with playing the magician who won’t reveal her tricks. But there is no special reason Big Tech’s services can only work under a self-regulation model.
In fact, it’s clear from the industry’s response to the News Media Bargaining Code, and from its half-baked misinformation code, that it will have to be dragged kicking and screaming to act in the public interest.
Government intervention is needed to rein in the harms of Big Tech, and those working in public health have an important role to play in better understanding the harms of misinformation.
What is measured matters
As is often said, what gets measured gets done. So if we want to meaningfully tackle misinformation, we should begin by measuring it.
In a world driven by data, where we can track COVID-19 fragments in sewage, monitor health capacity right down to the number of ICU beds available, and contact trace entire populations through an app, we should also be able to see what kind of information about COVID-19 is gaining traction online.
Right now Google, Facebook, Instagram, Twitter and others are sitting on a treasure trove of data about what is being shared in various bubbles. They know what conspiracy theories are taking off, where they’re taking off, and which vulnerable population groups they’re manipulating.
At the press of a button these tech giants could release this information to the rest of us. We should be demanding they do so.
The need for a “live list”
Reset Australia, the local affiliate of the global body fighting digital harms to society, is building a coalition of public health bodies to advocate for a live list of the most viral COVID-19 URLs.
The Doherty Institute, Immunisation Foundation of Australia, Immunisation Coalition, Coronavax, Australian College for Infection Prevention and Control, Islamic Council of Victoria, Asian Australian Alliance, and the Australian Muslim Advocacy Network, have all joined our call on government to mandate a live list. We encourage more public health bodies to join us.
In the early stages of our vaccine roll-out, a transparent log of the types of COVID-19 misinformation circulating among our communities would be a vital resource for decision makers, journalists, academics and public health officials seeking to track misinformation.
It wouldn’t demand that content be removed; rather, it would give us all a better bird’s-eye view of the types of misinformation out there, and of how the public health narrative can counter it.
False information that spurs hesitancy, fear and ultimately resistance is the largest threat to comprehensive vaccine distribution in our country.
We need a tool to measure this parallel information pandemic. The live list is that tool.
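As a purely illustrative sketch of what a single entry in such a list might contain, the example below records little more than a URL, how fast it is spreading and where. The field names and figures are invented for illustration; they are not taken from Reset Australia’s actual proposal.

```python
# Purely illustrative sketch of what a single "live list" entry might contain.
# Field names and figures are invented; they are not from Reset Australia's proposal.
from dataclasses import dataclass
from typing import List

@dataclass
class LiveListEntry:
    url: str                # the COVID-19 content being shared
    shares_last_24h: int    # how widely it spread in the past day
    platforms: List[str]    # where it is circulating, e.g. Facebook, YouTube
    first_seen: str         # date the URL was first detected (ISO format)

def most_viral(entries: List[LiveListEntry], top_n: int = 10) -> List[LiveListEntry]:
    """Return the most widely shared URLs: the core of a regularly updated live list."""
    return sorted(entries, key=lambda e: e.shares_last_24h, reverse=True)[:top_n]

# Two hypothetical entries, ranked by share volume.
entries = [
    LiveListEntry("https://example.com/vaccine-rumour", 12000, ["Facebook"], "2021-03-01"),
    LiveListEntry("https://example.com/5g-claim", 4300, ["YouTube", "Twitter"], "2021-02-27"),
]
for entry in most_viral(entries):
    print(entry.url, entry.shares_last_24h)
```

Even a minimal record like this, published regularly, would let public health officials see which falsehoods are gaining traction and where, without requiring any content to be taken down.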
See here for previous Croakey reports on the regulation of digital media platforms.