The health sector must engage proactively with global efforts to strictly regulate the use of artificial intelligence in healthcare, new reports suggest.
Marie McInerney writes:
Yet more alarms have been sounded about the potential for artificial intelligence (AI) in healthcare to cause harm, including through undermining patient privacy and health equity.
An editorial in the 12 August edition of The Lancet urges the medical community to amplify urgent calls for stringent regulation of generative AI.
Regulation should be a key concern of the first major global summit on AI safety, being held in the UK later this year, it says, urging that regulators “act to ensure safety, privacy, and ethical practice”.
It is also crucial, the editorial states, that health and medicine are well represented on a new high-level advisory body the United Nations is assembling to build global capacity for trustworthy, safe, and sustainable AI.
Noting that Google and other technology companies are already resisting such regulation, the editorial warns that the “tension between commercial interests and transparency risks compromising patient wellbeing, and marginalised groups will suffer first”.
The Lancet also highlights the risks of low- and middle-income countries relying on AI developed in the USA and Europe, warning that costs could be prohibitive without open-access alternatives.
“At present, the pace of technological progress far outstrips the guidance, and the power imbalance between the medical community and technology firms is growing,” says the editorial. “Allowing private entities undue influence is dangerous.”
Health equity is a particularly serious concern: “Algorithms trained on healthcare datasets that reflect bias in healthcare spending, for example, worsened racial disparities in access to care in the USA.”
Embedded racism
Meanwhile, The Lancet Global Health features research that set out to use AI to subvert stereotypical global health imagery only to unwittingly create “hundreds of visuals representing white saviour and Black suffering tropes and gendered stereotypes”.
In the project, Arsenii Alenichev and Professor Patricia Kingori from the Wellcome Centre for Ethics and Humanities at Oxford University, and Professor Koen Peeters Grietens from the Institute of Tropical Medicine in Antwerp, Belgium, used various image-generating prompts to create visuals of Black African doctors or traditional healers providing medicine, vaccines, or care to sick and suffering White children.
“However, despite the supposed enormous generative power of AI, it proved incapable of avoiding the perpetuation of existing inequality and prejudice,” they wrote.
“Although it could readily generate an image of a group of suffering White children or an image of Black African doctors, when we tried to merge the first two prompts, asking the AI to render Black African doctors providing care for White suffering children, in the over 300 images generated the recipients of care were, shockingly, always rendered Black.”
Displaying the AI-generated results in the article, they wrote that prompts for traditional African healers often produced images of White men in exotic clothing. When they asked the AI to generate an image of a traditional African healer healing a White child, the child was shown wearing clothing that appeared to be “a caricature of broadly defined African clothing and bodily practices”.
“This case study suggests, yet again, that global health images should be understood as political agents, and that racism, sexism, and coloniality are embedded social processes manifesting in everyday scenarios, including AI,” they wrote.
Safety first
Australia’s eSafety Commissioner Julie Inman Grant echoed The Lancet’s warning this week, urging that we learn from the era of “moving fast and breaking things” and “shift to a culture where safety is not sacrificed in favour of unfettered innovation or speed to market”.
Inman Grant released a position statement on generative AI, advising that AI-generated child sexual abuse material and deepfakes are already being reported to investigators.
“This month, we received our first reports of sexually explicit content generated by students using this technology to bully other students,” she said in a statement.
“That’s after reports of AI-generated child sexual abuse material and a small but growing number of distressing and increasingly realistic deepfake porn reports,” she said.
While there are opportunities for generative AI tools to enhance online safety, including through more accurate detection of illegal and harmful content and by helping to disrupt serious online abuse at pace and scale, Inman Grant warned that the “danger of generative AI is not the stuff of science fiction”.
“Harms are already being unleashed, causing incalculable harm to some of our most vulnerable,” she said.
AI may also compromise investigations into child abuse, she said.
“The inability to distinguish between children who need to be rescued and synthetic versions of this horrific material could complicate child abuse investigations by making it impossible for victim identification experts to distinguish real from fake.”
The position statement sets out Safety by Design interventions that the online industry can adopt immediately to improve user safety and empowerment.
They include:
- using age assurance measures to identify child users and apply age-appropriate safety and privacy settings
- establishing clear internal protocols for working with law enforcement and support services, and for handling illegal content, and
- applying digital watermarking to content, such as embedding a logo, or invisible or inaudible data.
National strategy needed
In June Croakey reported on a consultation by the Department of Industry, Science and Resources on Responsible AI in Australia, noting that it will be important to monitor how public health and health equity considerations are factored in.
Submissions from that consultation, which is now closed, have not been published; however, the Digital Health Cooperative Research Centre (DHCRC) has published its response to the discussion paper.
The DHCRC supports the call in the Australian Alliance for Artificial Intelligence in Healthcare’s (AAAiH) 2021 AI roadmap for the development of a National AI in Healthcare Strategy to support and encourage collaboration and strategic leadership.
It says responsibility for delivering the strategy should rest with the Department of Health and Aged Care, jointly with the Australian Digital Health Agency and the Therapeutic Goods Administration, and adds that it sees no need to establish a separate new regulatory and oversight organisation.
The Australian Medical Association published a Position Statement on AI this month, cautioning that while AI has the potential to benefit healthcare, its clinical and social implications in the healthcare environment remain largely unknown and uncertain.
“In such a fluid and rapidly-expanding environment, the development and implementation of AI technologies must be undertaken with appropriate consultation, transparency, accountability and regular, ongoing review to determine its clinical and social impact and ensure it continues to benefit, and not harm, patients, healthcare professionals and the wider community,” said the AMA.
See Croakey’s archive of articles on AI and health