Introduction by Croakey: As regular readers of Croakey may know, we regularly use the hashtag #RegulateDigitalPlatforms on Twitter to share news and resources about how public health is being undermined in many ways as a result of the immense market power of Big Tech companies such as Meta and Google.
Below is a summary of recent articles shared at this hashtag, followed by an article published recently at The Conversation that raises concerns about the lack of transparency surrounding “dark advertising” – ads that often are only visible to their intended targets, and disappear moments after they have been seen.
The two articles below make timely reading with the rise of the far right in Sweden, Italy, and other countries, and as the White House pushes for reform of Big Tech companies, with calls to increase transparency about their algorithms and content moderation decisions.
Melissa Sweet writes:
Social media platforms have helped to amplify and embed the misleading claim that United States President Joe Biden did not legitimately win the 2020 election, with worrying implications for democracy and public health, according to a new report.
Titled ‘Spreading The Big Lie: How Social Media Sites Have Amplified False Claims of U.S. Election Fraud’, the report was recently published by the NYU Stern Center for Business and Human Rights at New York University.
Roughly 70 percent of Republican voters tell pollsters that Joe Biden was not legitimately elected president in 2020, says the report, stating that this claim “has evolved from a backward-looking Big Lie to an article of faith in the Republican Party that elections in the United States are generally corrupt”.
The report examines how platforms, including Meta/Facebook, Twitter, TikTok, Instagram and YouTube, have contributed to the spread of this misinformation and failed to deal with it effectively, while noting two caveats: social media platforms are not the sole engine driving election denialism; and the logistical difficulties of grappling with false election claims, especially due to the increasing popularity of video.
Nevertheless, the report says social media companies should be held responsible for the choices they have made, such as farming out content moderation, and should invest more of their “billions upon billions of dollars” of revenue in ameliorating the harms to which they contribute.
“The malady of election denialism in the US has become one of the most dangerous byproducts of social media, and it is past time for the industry to do more to address it,” says the report.
The implications for the upcoming midterm elections are alarming enough, but the report also describes how denialism is disrupting the political ecosystem more widely; for example, through the enactment of laws to make it more difficult for some groups to vote; and through sparking threats of violence against election administrators.
The report’s recommendations below may be useful for health advocates grappling with wider concerns about misinformation and disinformation.
On related matters, see this recent ABC story: How Amazon has ended up funding far-right publishers and disinformation websites.
Predatory climate delay
Climate and Capital Media, a global media company that connects investors and entrepreneurs working on climate change solutions, has published a list of the world’s ten most skilled actors in “predatory climate delay”.
The article cites a 2016 post by writer and futurist Alex Steffen introducing the term “predatory delay”. This is the kind of business-as-usual, slow-walking response to climate change we see from the world’s largest businesses, governments and financial institutions. It means inching forward with incremental change while, Steffen says, fighting “to delay change of any real magnitude…”
As well as Shell, JP Morgan Chase, Rupert Murdoch and others, Meta/Facebook makes the list because of its “half-hearted efforts to stop selling climate denial ads”.
The report says Meta claimed two years ago that it would crack down on climate disinformation and stop selling ads to climate deniers. But, according to the Center for Countering Digital Hate (CCDH), Meta only found and labelled about half of the posts promoting articles about climate denial in 2021.
“By failing to do even the bare minimum to address the spread of climate denial information, Meta is exacerbating the climate crisis,” says CCDH Chief Executive Imran Ahmed. “Climate change denial flows unabated on Facebook and Instagram.”
Principles for change
On 8 September, the White House brought together experts and practitioners to discuss the harms that tech platforms cause and the need for greater accountability. Attendees identified concerns in six key areas: competition; privacy; youth mental health; misinformation and disinformation; illegal and abusive conduct, including sexual exploitation; and algorithmic discrimination and lack of transparency.
In an accompanying statement, the Biden-Harris Administration announced the following principles for reform:
1. Promote competition in the technology sector. The statement says a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage. We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.
2. Provide robust federal protections for Americans’ privacy. There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimise how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.
3. Protect children by putting in place even stronger privacy and online protections for them, including prioritising safety by design standards and practices for online platforms, products, and services. The statement said platforms and other interactive digital service providers should be required to prioritise the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.
4. Remove special legal protections for large tech platforms. In the US, tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials.
5. Increase transparency about platforms’ algorithms and content moderation decisions. The statement says tech platforms are notoriously opaque despite the profound impact of their decisions about what content to display to a given user and when and how to remove content from their sites. Several participants raised concerns about the rampant collection of vast troves of personal data by tech platforms. Some experts tied this to problems of misinformation and disinformation on platforms, explaining that social media platforms maximize “user engagement” for profit by using personal data to display content tailored to keep users’ attention – content that is often sensational, extreme, and polarizing.
6. Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.
Meanwhile, this recent article at The Conversation gives an example of alternative social media platforms.
The article below, first published at The Conversation under the headline ‘How dark is ‘dark advertising’? We audited Facebook, Google and other platforms to find out’, reports on a new study that audited the advertising transparency of seven major digital platforms.
The article’s authors are Associate Professor Nicholas Carah, Dr Aimee Brownbill, Dr Amy Shields Dobson, Associate Professor Brady Robards, Professor Daniel Angus, Kiah Hawker and Lauren Hayden.
Nicholas Carah, Aimee Brownbill and colleagues write:
Once upon a time, most advertisements were public. If we wanted to see what advertisers were doing, we could easily find it – on TV, in newspapers and magazines, and on billboards around the city.
This meant governments, civil society and citizens could keep advertisers in check, especially when they advertised products that might be harmful – such as alcohol, tobacco, gambling, pharmaceuticals, financial services or unhealthy food.
However, the rise of online ads has led to a kind of “dark advertising”. Ads are often only visible to their intended targets, they disappear moments after they have been seen, and no one except the platforms knows how, when, where or why the ads appear.
In a new study conducted for the Foundation for Alcohol Research and Education (FARE), we audited the advertising transparency of seven major digital platforms. The results were grim: none of the platforms are transparent enough for the public to understand what advertising they publish, and how it is targeted.
Why does transparency matter?
Dark ads on digital platforms shape public life. They have been used to spread political falsehoods, target racial groups, and perpetuate gender bias.
Dark advertising on digital platforms is also a problem when it comes to addictive and harmful products such as alcohol, gambling and unhealthy food.
Read more: Facebook ads have enabled discrimination based on gender, race and age. We need to know how ‘dark ads’ affect Australians
In a recent study with VicHealth, we found age-restricted products such as alcohol and gambling were targeted to people under the age of 18 on digital platforms. At present, however, there is no way to systematically monitor what kinds of alcohol and gambling advertisements children are seeing.
Advertisements are optimised to drive engagement, such as through clicks or purchases, and target people who are the most likely to engage. For example, people identified as high-volume alcohol consumers will likely receive more alcohol ads.
This optimisation can have extreme results. A study by FARE and Cancer Council WA found one user received 107 advertisements for alcohol products on Facebook and Instagram in a single hour on a Friday night in April 2020.
How transparent is advertising on digital platforms?
We evaluated the transparency of advertising on major digital platforms – Facebook, Instagram, Google search, YouTube, Twitter, Snapchat and TikTok – by asking the following nine questions:
- is there a comprehensive and permanent archive of all the ads published on the platform?
- can the archive be accessed using an application programming interface (API)?
- is there a public searchable dashboard that is updated in real time?
- are ads stored in the archive permanently?
- can we access deleted advertisements?
- can we download the ads for analysis?
- are we able to see what types of users the ad targeted?
- how much did it cost to run the advertisement?
- can we tell how many people the advertisement reached?
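The nine questions above amount to a simple transparency checklist that can be scored per platform. The sketch below shows one way a researcher might encode and tally such a checklist; the example answers are illustrative placeholders, not the study’s actual findings for any named platform.

```python
# Illustrative sketch: scoring a platform against the nine transparency
# criteria listed above. The example answers below are placeholders,
# not the study's actual findings for any real platform.

CRITERIA = [
    "comprehensive and permanent archive of all ads",
    "archive accessible via an API",
    "public searchable dashboard updated in real time",
    "ads stored in the archive permanently",
    "deleted advertisements accessible",
    "ads downloadable for analysis",
    "targeting criteria visible",
    "advertising spend visible",
    "reach (number of people) visible",
]

def score_platform(answers: dict) -> tuple:
    """Return (criteria met, total criteria) for one platform."""
    met = sum(1 for criterion in CRITERIA if answers.get(criterion, False))
    return met, len(CRITERIA)

# Hypothetical platform that only exposes a dashboard of active ads:
example = {
    "archive accessible via an API": True,
    "public searchable dashboard updated in real time": True,
}
met, total = score_platform(example)
print(f"Transparency criteria met: {met}/{total}")  # → 2/9
```

A checklist like this makes the audit repeatable: the same questions can be re-run against each platform over time to track whether transparency improves.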
All platforms included in our evaluation failed to meet basic transparency criteria, meaning advertising on the platform is not observable by civil society, researchers or regulators. For the most part, advertising can only be seen by its targets.
Notably, TikTok had no transparency measures at all to allow observation of advertising on the platform.
Other platforms weren’t much better, with none offering a comprehensive or permanent advertising archive. This means that once an advertising campaign has ended, there is no way to observe what ads were disseminated.
Facebook and Instagram are the only platforms to publish a list of all currently active advertisements. However, most of these ads are deleted after the campaign becomes inactive and are no longer observable.
Platforms also fail to provide contextual information for advertisements, such as advertising spend and reach, or how advertisements are being targeted.
Read more: ‘Transparency reports’ from tech giants are vague on how they’re combating misinformation. It’s time for legislation
Without this information, it is difficult to understand who is being targeted with advertising on these platforms. For example, we can’t be sure companies selling harmful and addictive products aren’t targeting children or people recovering from addiction. Platforms and advertisers ask us to simply trust them.
We did find platforms are starting to provide some information on one narrowly defined category of advertising: “issues, elections or politics”. This shows there is no technical reason for keeping information about other kinds of advertising from the public. Rather, platforms are choosing to keep it secret.
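A concrete instance of this narrow carve-out is Meta’s Ad Library API, which exposes only ads classified as political or issue-based. The sketch below shows how a researcher might construct such a query; the endpoint, version and field names are assumptions based on Meta’s public documentation at the time of writing and may change, the access token is a placeholder, and no request is actually sent.

```python
# Sketch: building a query URL for Meta's Ad Library API, which covers
# only the "issues, elections or politics" ad category discussed above.
# Endpoint, version and field names are assumptions based on Meta's
# public documentation and may change. No network request is made here.
from urllib.parse import urlencode

BASE = "https://graph.facebook.com/v15.0/ads_archive"

params = {
    "search_terms": "election",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # the only broadly archived type
    "ad_reached_countries": "AU",
    "fields": "ad_creative_bodies,ad_delivery_start_time,spend,impressions",
    "access_token": "YOUR_ACCESS_TOKEN",   # placeholder credential
}

url = f"{BASE}?{urlencode(params)}"
print(url)
```

That the same archive-and-API machinery is not offered for alcohol, gambling or unhealthy-food advertising underlines the article’s point: the barrier is a policy choice, not a technical one.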
Bringing advertising back into public view
When digital advertising can be systematically monitored, it will be possible to hold digital platforms and marketers accountable for their business practices.
Our assessment of advertising transparency on digital platforms demonstrates that they are not currently observable or accountable to the public. Consumers, civil society, regulators and even advertisers all have a stake in ensuring a stronger public understanding of how the dark advertising models of digital platforms operate.
The limited steps platforms have taken to create public archives, particularly in the case of political advertising, demonstrate that change is possible. And the detailed dashboards about ad performance they offer advertisers illustrate there are no technical barriers to accountability.
Nicholas Carah is an Associate Professor in Digital Media and Director of Digital Cultures and Societies at The University of Queensland.
Dr Aimee Brownbill is an Honorary Fellow in The University of Queensland School of Communications and Arts and Senior Policy and Research Advisor at the Foundation for Alcohol Research and Education (FARE).
Dr Amy Shields Dobson (they/them) convenes the Digital and Social Media program at Curtin University, on Whadjuk Boodjar.
Brady Robards is an Associate Professor in Sociology in the School of Social Sciences at Monash University.
Professor Daniel Angus is Professor of Digital Communication in the School of Communication, and leader of the Computational Communication and Culture program in QUT’s Digital Media Research Centre.
Kiah Hawker is a PhD student and sessional academic from the University of Queensland (UQ).
Lauren Hayden is a PhD Candidate and Research Assistant, The University of Queensland.
Xue Ying Tan is a Software Engineer, Digital Media Research Centre, Queensland University of Technology.
See Croakey’s extensive archive of articles on digital platforms and health