Marie McInerney writes:
A new code of practice to address online misinformation and disinformation, developed for and by the digital tech industry, has been slammed by critics as falling far short of the critical action needed to prevent growing harms to public health and democracy.
Leading public health experts, who have fought long battles with food, alcohol, tobacco and gambling industries, warned that voluntary codes rarely work and challenged the tech industry to invite the public health sector to play “a serious role” in efforts to combat disinformation.
The Public Health Association of Australia (PHAA) said Facebook’s move to block Australian news – which sparked a public health emergency by also shutting down the Facebook pages of multiple health agencies and services across Australia – showed how hard industry will resist regulation that it perceives as a threat.
PHAA CEO Terry Slevin welcomed the general direction of the Australian Code of Practice on Misinformation and Disinformation, including its acknowledgement that the Code must extend to “public goods such as public health”, but said, for the most part, “voluntary codes are an unenforced gossamer-thin veil of pretence, with the objective of avoiding genuine regulation”.
He said:
“Seeing the way the ‘Facebook war’ is playing out, it seems unlikely that large digital platforms will sacrifice their advertising revenue or other interests lightly.
“One test of the bona fides of this process will be whether they invite independent, scientific or government public health experts to play a serious role advising on the health risks that online misinformation can cause.
“What we need is a mandatory, vigorously enforced code with access to meaningful penalties for breaches, founded on scientific evidence, if we want to see any change in the prioritisation of public health over private profits.”
Slevin said misinformation, whether about health information, the marketing of unhealthy products, or unhealthy directions in politics, is a very real issue.
“This is not just about ‘fake news’ in politics – although that is certainly important. Online misinformation exists as disguised marketing strategies for many unhealthy products, including tobacco and vaping use, alcohol promotion, unhealthy food marketing – especially to children, gambling promotion, and many other goods and services that cause disease and other forms of harm,” he said.
“PHAA is working on many such policy fronts, and digital misinformation is certainly a massive problem.”
Watching carefully
The Digital Industry Group Inc (DIGI) – an Australian peak body representing Google, Facebook, Twitter, Microsoft, Redbubble, and TikTok – developed the new voluntary code at the request of the Federal Government.
Perhaps stung by Facebook’s retaliation last week against its mandatory media bargaining code, the Government said it will be “watching carefully to see whether this voluntary code is effective in providing safeguards against the serious harms that arise from the spread of disinformation and misinformation on digital platforms.”
Communications Minister Paul Fletcher said the Australian Communications and Media Authority (ACMA) will report to the Government by 30 June 2021 on the impact of the code and initial compliance, to “guide us on whether further action is needed”.
Reset Australia, the Australian arm of the global Reset initiative to counter digital threats to democracy, was scathing about the new code, saying it “does nothing but reinforce the arrogance of giants like Facebook” and that the Federal Government should not accept the “insouciant contempt” it shows for the Australian public.
It said the real problem with the code was that the algorithms used by Facebook and others “actively promote disinformation, because that’s what keeps users engaged”.
“This limp, toothless, opt-in code of practice is both pointless and shameless,” said Reset Australia executive director Chris Cooper, calling for an independent public regulator of the tech industry with the power to inspect and audit algorithms and to issue fines, notices, and other civil penalties.
It was a call backed by Melbourne University’s Public Health Professor Rob Moodie, who described self-regulation as “a complete smokescreen” and “window dressing”.
“Industry self-regulation virtually never works – why would it? Why would they ever develop rules that would in any way harm their bottom line?” he told Croakey, pointing to past lessons from the food, tobacco and gambling industries.
“Whether you use the metaphor of the ‘fox looking after the henhouse’ or the ‘burglars fitting the locks’, it’s the same result – as Reset Australia says, it is ‘a limp, toothless, opt-in code of practice which is both pointless and shameless’ – or shameful!” he said, urging also that the tech giants be required to pay taxes in Australia.
“That would be a great start,” he said.
Signing up to safeguards
Under the code, the tech platforms can sign up to a range of actions, including issuing regular warnings to users about the trustworthiness of news articles and advertisements, prohibiting political advertisements that misrepresent or deceive the public and helping users know if they have been targeted by a political party.
A report published last month by Associate Professor Andrea Carson, from the Department of Politics, Media and Philosophy at La Trobe University, found that the global spread of online misinformation has the potential to erode foundational elements of modern civilisation across much of the developed and developing world.
“Social cohesion, public health and safety, political stability and democracy are all under threat by the rapid and sometimes malicious dissemination of false information within and across national borders,” the report said, while noting the potential for anti-misinformation laws and regulations to be misused by governments to undermine freedom of speech and the media.
Releasing the code today, DIGI CEO Sunita Bose said the group had conducted a “robust public consultation process” and believed it had “struck the right balance” in protecting Australians from harmful misinformation online as well as their privacy and freedom of expression.
Signatories would commit to adopting “a range of scalable measures” to combat misinformation and disinformation, such as content labelling and removal, restricting inauthentic accounts and behaviours, partnerships with fact-checking organisations, and technology to help people to check the authenticity of digital content, she said.
Each would commit to publicly releasing an annual transparency report about their efforts, with the first set to be released in May, she said.
You can watch this video and read Bose’s op-ed on the major provisions of the code.
Lack of teeth, diversity, representation
While the code includes welcome commitments to addressing some of the harms posed by disinformation and misinformation, it has worrying gaps, and there are also concerns about the lack of representation in its governance and consultations.
These include the lack of consultation with public health organisations and experts, and with communities such as Aboriginal and Torres Strait Islander people, who bear so much of the brunt of racism on digital platforms – even though one of the code’s seven main objectives is focused on research into the harms of misinformation and disinformation.
Deakin University Professor Kathryn Backholer, a Senior Research Fellow in the Global Obesity Centre, a World Health Organization Centre for the Prevention of Obesity, said she is yet to read the report in detail but agreed that “voluntary codes by industry (more generally) time and time again have shown to be ineffective with little accountability and subjective interpretation of rules”.
“This is a real concern. As is the lack of representation by public health and Aboriginal and Torres Strait Islander groups – I am not sure how hard they (DIGI) tried to seek input,” she told Croakey.
Public health consultant Rebecca Zosel agreed that self-regulation undermines governments’ ability to protect the public, often to boost private profits.
“Epic failures of self-regulation in other areas of public health, such as the food industry, have clearly shown that when we allow industry to make and enforce the rules with no oversight by a government entity, the industry has a vested interest in not pursuing the rules with vigour and as a result the health and wellbeing of Australians suffers.”
Arguably, Facebook’s blocking of Australian news meant it could not meet the code’s objective of enabling users to make more informed choices of news sources, said public health promotion specialist Kristy Schirmer.
It’s “currently impossible to fact check Uncle Gary’s meme against any news whilst staying on Facebook”, she said, adding she was also concerned about the complaints process, with decisions yet to be made on how it will be policed and how citizens and institutions can make complaints.
“This is evidence of the weakness of the Code,” she said.
Digital health researcher and consultant Dr Becky White agreed that Facebook’s move to block credible news last week means that some of the objective measures they have signed up to – prioritising credible and trusted news sources and empowering consumers to make better informed choices – “will be very hard to implement, even on the first day”.
“The code being launched on the first day of our vaccine roll-out provided a stark juxtaposition in the inability to share or post news items via Facebook celebrating this milestone,” she said, agreeing there should be mechanisms for public health experts to be part of the code’s review process.
Serious threats to democracy, safety
The Public Interest Journalism Initiative also expressed “deep concern” with both the process and the final iteration of the code.
In a statement, it said the need for an industry code or regulatory control to deal with misinformation “has never been greater” and it welcomed the move towards one.
However, it said this code “falls well short” through its lack of detail and clarity about what is expected of its signatories, and about the role of public interest journalism in combating misinformation.
The code provides little detail on governance mechanisms, administrative oversight, selection and composition of independent members, and complaint handling, and does not include agreed or uniform metrics for reporting or requirements for record-keeping, said PIJI CEO Anna Draffin.
PIJI was also concerned that harm caused by misinformation was “significantly underestimated” by the definition in the code, which refers to “harms which pose an imminent and serious threat”.
That failed to recognise that the accumulation of small amounts of misinformation disseminated over time “can culminate in serious threats to the democratic process and community safety.”
“The Code appears to be little more than a broad industry statement of intent,” Draffin said.
You can read the 17 submissions responding to the draft code here.
Note from Croakey: this article was amended after publication.