Introduction by Croakey: News that Clive Palmer’s United Australia party has spent nearly $1.2 million on YouTube ads criticising lockdowns and government responses to the COVID-19 pandemic adds urgency to the critical need for action on online misinformation and disinformation in Australia.
Yet the Federal Government is still to publish or respond publicly to an important report on the effectiveness of the much-maligned voluntary Australian Code of Practice on Disinformation and Misinformation, which has been on Communications Minister Paul Fletcher’s desk for four months.
Marie McInerney writes:
The Federal Government is being urged to release and respond to a four-month-old report on the performance of a much-criticised voluntary code of practice developed by a Big Tech peak body to address online misinformation and disinformation.
Critics of the code, who say it is shameful and pointless to leave such critical regulation in the hands of the industry, want the promised report by the Australian Communications and Media Authority (ACMA) made public urgently, given the ongoing threat of misinformation and disinformation to pandemic control.
Deakin University Professor Kathryn Backholer, a Senior Research Fellow in the Global Obesity Centre, said COVID-19 had shown the harms of misinformation and disinformation, with studies showing online disinformation campaigns are associated with a drop in mean vaccination coverage over time.
“We need the report released as soon as possible,” she told Croakey. “Combating misinformation and disinformation should be regarded as a public health emergency.”
Public Health Association of Australia (PHAA) CEO Terry Slevin agreed the ACMA report should be in the public domain and said the risks of Big Tech raise key questions “that have an impact on public health, on the quality of our democracy, on the survival of the planet, now and into the future”. (See more from Slevin below.)
Life and death issue
Reset Australia, an advocacy group working to counter digital threats to democracy, said misinformation and disinformation are “a life and death issue”, which had really been brought to the fore by COVID-19.
Dhakshayini Sooriyakumaran, Reset Australia’s Director of Technology Policy, said it is easy to think Australia will avoid grave threats to democracy, stirred up by disinformation, like the assault on the United States Congress in January this year.
“But all of us likely know someone who might be the victim of misinformation as it relates to COVID, for example, and it’s quite shocking and alarming for all of us to see the impact of that and how it has such devastating consequences for health,” they told Croakey.
Sooriyakumaran said Reset Australia will tomorrow (Thursday) launch a policy memo on: “Facebook Versus Democracy: Is Australia’s election under threat?”, to prompt broader conversations in Australia.
To mark that release, Reset Australia is hosting a Q&A for MPs with high profile Facebook whistleblower Frances Haugen, who testified to the US Senate earlier this month that the company’s social media platforms “harm children, stoke division and weaken our democracy”.
Croakey hopes to report on that event, which is not open to the media or public but is expected to be made available to journalists as a recording.
Even the Pope this week issued an appeal to the tech giants, as part of a wide-ranging address on COVID-19, vaccine distribution, Black Lives Matter, and climate change, saying:
“In the name of God, I ask the technology giants to stop exploiting human weakness, people’s vulnerability, for the sake of profits without caring about the spread of hate speech, grooming, fake news, conspiracy theories, and political manipulation.”
Communications Minister Paul Fletcher in February welcomed the development of the Australian Code of Practice on Disinformation and Misinformation by the Digital Industry Group (DIGI), an Australian peak body representing Google, Facebook, Twitter, Microsoft, Redbubble, and TikTok.
But, in the wake of Facebook’s disruption of Australian news and public health coverage to warn against other regulation, he warned the Federal Government would be “watching carefully to see whether this voluntary code is effective in providing safeguards against the serious harms that arise from the spread of disinformation and misinformation on digital platforms”.
Fletcher said ACMA would report to the Government by 30 June 2021 on the impact of the code and initial compliance, to “guide us on whether further action is needed”.
That assessment was delivered in June by ACMA but is yet to be made public.
A spokesman for the Minister told Croakey the Federal Government is “considering” the ACMA report, but would provide no further details on whether and when it will be publicly released, or when the Government is likely to respond.
Under the code, as DIGI outlines, signatories must commit to safeguards to protect against online disinformation and misinformation, including publishing and implementing policies on their approach, and providing a way for their users to report content that may violate those policies.
Every signatory commits to producing an annual transparency report documenting their efforts under the code’s commitments, the first set of which were released publicly in May.
Earlier this month, DIGI announced it had “bolstered the governance” of the code by appointing an independent Complaints Sub-Committee to resolve complaints about possible breaches by signatories of their code commitments, and a portal on its website for the public to raise such complaints. The independent members of that sub-committee are Dr Anne Kruger, Victoria Rubensohn AM and Christopher Zinn.
It came as Deputy Prime Minister Barnaby Joyce was hitting out at “the unaccountable social media giants that give a platform to trolls and faceless cowards to engage in character assassination” in the wake of “a devastating, soul-destroying, career-ending lie” spread online about his daughter.
In comments provided to Croakey by his office, Minister Fletcher welcomed the DIGI move as “an important development to strengthen the way the code will protect Australians”, noting that online disinformation can cause serious harm, “especially during an event such as the pandemic”. ACMA also welcomed the move.
However, the latest DIGI move has been slammed by critics of the code, with the Centre for Responsible Technology saying it was a “total farce” and “woefully inadequate”, and Reset Australia’s Sooriyakumaran saying the new governance arrangements are “laughable, given the problem they seek to address: Big Tech’s fundamental threat to democracy”.
DIGI has “pulled together some great minds” for their proposed board, but their ability to affect meaningful reform will not be realised without proper regulation, Sooriyakumaran said, adding:
DIGI’s code is not much more than a PR stunt given the negative PR surrounding Facebook in recent weeks.
If DIGI are serious about cracking down on the serious harms posed by misinformation and polarisation then it should join Reset Australia and other civil society groups globally in calling for proper regulation.
We need answers to questions like: how do Facebook’s algorithms rank content? Why are Facebook’s AI-based content moderation systems so ineffective? The proposed reforms to the code do not provide this.
Backholer said we are yet to see any meaningful reduction in harmful misinformation and disinformation as a result of the code.
“Self-regulatory codes are weak and ineffective, and are simply a ploy for delaying meaningful government-led regulation. Big Tech should not be left to regulate itself – we need government to step up and set the rules, with strong and effective sanctions.”
Risks of self-regulation
PHAA’s Terry Slevin responded to Croakey’s question about the ACMA report and the new code provisions. Here are his comments in full:
Generally, both governments and industry bodies like self-regulation.
Industry bodies like it because they get to control it, and it doesn’t get in the way of them going about business as usual. They can continue to sell advertising and maintain profits while putting in the minimum possible effort and making it look like they are addressing the problem.
Meanwhile — in most instances, the problem continues — or gets worse. And they do not need to knock back revenue from their big customers. And if anyone does grossly transgress, often the communication, message or campaign at fault is long finished, its objectives (eg selling unhealthy products) have been met and things have moved on to the next campaign.
And if fault is found there is rarely a meaningful penalty. And if a real penalty is imposed, the self-regulatory regime rarely has the power to enforce any such sanction. So – BAU (business as usual) — no holds barred. Sweet!
Governments like it because they don’t have to do anything, like make hard decisions – often decisions that are unpopular with the industries they are regulating – or think through, create and navigate the legislation that underpins real regulation.
If governments do look to create real regulation, they come under constant pressure from the industries they seek to regulate either not to regulate at all or, if they do, to regulate with the lightest possible touch.
But, as has been shown time and time again, self-regulation equals no real regulation. Only governments have the power to set and enforce regulations and apply real penalties if the rules are breached.
Q: Do you welcome this recent move from DIGI to introduce independent oversight and a public complaints facility and/or is it missing important voices?
I have no reason to doubt the intent or integrity of the people named. But – and it is a big but – the true test is: ‘Will it make a real difference?’. I would ask those people to articulate what they think a real difference would look like. What are the metrics? How will we know if it serves a useful purpose? Some simple questions:
- How many people know there is a complaints process?
- How many people make a complaint?
- How easy is it to make a complaint?
- What is the process such complaints follow?
- Is it transparent to all parties?
- What are the criteria for determining what is a legitimate complaint, and how is that assessed?
- What proportion of complaints are upheld?
- What are the consequences if a complaint is upheld?
- How are those consequences followed through? Penalties?
We all have an interest in the outcome. Digital communications channels are only becoming more powerful and influential as time rolls on.
Do we want to see greater controls on the extent to which gambling, alcohol, junk food, tobacco, drugs, other unhealthy products are promoted to us, and in particular to kids?
Do we want to see controls on political advertising and marketing, so as to avoid a circumstance where those with the most money can generate the greatest noise in an election campaign, regardless of whether the information provided is accurate, fair or legal?
Do we want to have confidence that the health and climate information provided on digital platforms is accurate and based on sound evidence, rather than distorted by commercial or other motives?
These are key questions that have an impact on public health, on the quality of our democracy, on the survival of the planet, now and into the future.
An interesting aspect of this is the extent to which there is self-interest from elected officials. It seems in the political interest of all who seek election to minimise the possibility of leaving control exclusively in the hands of those with the deepest pockets. Some might suggest we have passed that point. The Big Deal (a documentary on Australia’s political lobbying industry) this week points in that direction.
It strikes me that the only way to do this is to have real regulation. Leaving the fox in charge of the hen house rarely works out well for the chooks.
See Croakey’s extensive archive of stories about digital platforms and public health.
Support our public interest journalism, for health.
Other ways to support.