Marie McInerney writes:
The use of artificial intelligence (AI) in healthcare is generating widespread enthusiasm and hype, but a recent conference also heard cautions about its environmental impact and calls for more rigorous evidence and accountability for its benefits and harms.
Concerns about the impact of AI on patient-centred care, data privacy and security, equitable access and representation, and on workforce and jobs over the longer term were also highlighted at a trans-Tasman medical radiation sciences conference on Kaurna Country in Adelaide last month.
The impact of bias is another issue that looms large in AI for medical imaging and radiation therapy, for patients and workforce alike, according to experts at the Australian Society of Medical Imaging and Radiation Therapy’s (ASMIRT) 19th National Conference, hosted in conjunction with the New Zealand Institute of Medical Radiation Technology (NZIMRT).
Johnathan Hewis, Senior Lecturer in Medical Imaging at Charles Sturt University, has been involved in Australian research that confirmed gender and diversity bias in text-to-image generative AI (such as ChatGPT) when it was asked to generate a series of individual and group images of radiologists and radiographers.
Hewis told the conference that, while women make up nearly 70 percent of the diverse radiography workforce in Australia, male representation in the AI-generated images in the research was 68 percent for radiologists and 55 percent for radiographers.
More than 90 percent of the images featured light skin tones and very little representation of anyone over 55. There was also a notable lack of body diversity in the portrayal of radiographers and radiologists, he said, “with uniformly idealised or stereotypically ‘fit’ bodies and no visible representation of individuals with disabilities, such as the use of mobility aids, prosthetics, or assistive devices”.
Hewis said this has implications for the way the professions are represented, for example in education and recruitment, and for patients and the broader public.
“Images that are not representative of the diversity of the professions risk amplifying historical and institutionalised biases”, he said.
That’s also a risk to equity and health for Aboriginal and Torres Strait Islander people, said Associate Professor Courtney Ryder, an Aboriginal epidemiologist at Flinders University in Adelaide and a member of the AI panel at the conference.
“This is why Indigenous Governance, leadership and knowledge is critical in the development of AI, to decrease this bias and impacts,” she later told Croakey.
The promise of AI, however, “is significant not just for improving health and wellbeing outcomes, but over a range of spheres,” she said.
“Ensuring Aboriginal and Torres Strait Islander driven frameworks and guidelines are essential to this.”
Ryder is leading efforts at Flinders University to design and build SMART-PH (DigitiSing InforMAtion for PRacTice in Public Health) as “the first AI-driven public health platform to address emerging health priorities”, including pandemics and natural disasters.
She told the conference that co-design, with consumers, community and industry, “is very important and central to the work we do”.
Elephant in the room
Given that medical radiation sciences are so technology-reliant, it’s no surprise that ASMIRT’s annual conferences show a steady and growing interest in the role, promise and risks of AI. It was the focus of both the opening and closing plenaries, as well as other presentations at this year’s event.
But it was only in the final minutes of the closing session that the massive environmental impact of AI came up, via a question from the audience.
Responding to the question, panellist Dr Nick Woznitza cited work published in 2024 in Radiology that looked at “the double-edged sword” of AI and medical radiation technology.
The paper’s authors agreed that AI has the potential to improve environmental sustainability in medical imaging if implemented “judiciously”.
It can shorten MRI scan times through accelerated acquisition, improve the scheduling efficiency of scanners, and optimise the use of decision-support tools to reduce low-value imaging, the authors said.
But the paper also warned that total emissions from cloud-based data centres that provide the necessary computational power to store and process large volumes of medical imaging data “are now larger than that for the entire airline industry”.
“Sometimes we need to learn when not to do things,” said Woznitza, an Australian consultant radiographer working in the United Kingdom at the University College London Hospitals (UCLH) and AI and radiography researcher at Canterbury Christ Church University.
AI’s carbon footprint has already been dubbed the “elephant in the room” for the technology, including by Facebook AI researchers, according to an article in Nature last year, which declared that the full planetary costs of generative AI are “closely guarded corporate secrets”.
AI’s costs come on top of an already heavy carbon footprint and wide environmental impact in healthcare in general, and in medical radiography and radiology in particular.
Tarni Nelson, a Lecturer in Medical Radiation Science at Charles Sturt University, told the conference that radiology’s environmental impact also includes hospital wastewater (the presence of radiographic pharmaceuticals), waste, and low-value imaging.
The broader healthcare sector contributes around five percent of greenhouse gas emissions worldwide. Pathology and diagnostic imaging together account for a significant part of that, up to nine percent according to one Australian study.
That broader environmental focus is key, said Associate Professor Courtney Ryder.
“Yes, there is huge energy and water consumption particularly used for training of large language and deep learning models,” she later told Croakey. “Even simple forms of gratitude (‘thank you’) back to AI increase energy consumption and carbon emissions.”
“But across the board in healthcare there are significant environmental health impacts (i.e. carbon emissions from pathology testing, anaesthetic gases). We need to look at ways to address these impacts to ensure a safer future for many generations to come.”
Other health impacts are also in play, including how AI can turbo-charge misinformation and disinformation, with the potential to seriously undermine public health and evidence-informed policy making.
The Lancet warned earlier this year that “misleading social media content pervades information on cancer prevention and treatment; can lead patients to abandon evidence-based treatments in favour of influencer-backed alternatives; downplays the seriousness of mental health conditions; and promotes unregulated supplements claiming to work for everything from weight loss to reversal of ageing.”
The Consumers Health Forum of Australia (CHF) also hosted a webinar last year (reported here by Croakey) which asked whether AI tools for clinicians and patients are fit for purpose in healthcare. Alongside other issues, it discussed the risks of ‘hallucinations’, where AI gets the information wrong, and the findings of the first national citizens’ jury on AI in healthcare.

What’s the problem?
Opening the panel discussion, Woznitza said there is “a great deal of hype, enthusiasm and promise” in AI for medical imaging and radiation therapy.
What he’d like his profession to start doing is “actually identifying a clear problem, a clinical need…and then seeing if AI is the answer, rather than saying, ‘Ooh shiny new toy, what can we use it for?’.
“I think that’s going to be key, both in terms of meaningful adoption, but also sustainability in healthcare.”
Woznitza also urged delegates to remember they are part of evidence-based professions, saying “anyone can do a ROC curve”, referring to the receiver operating characteristic curve used to assess the overall diagnostic performance of a test and to compare the performance of two or more diagnostic tests.
But “actually every point on a ROC curve is potentially a person”, he warned, calling for an end to ’before and after’ study designs as clinical validation that AI works.
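For readers unfamiliar with the metric, here is a minimal, purely illustrative sketch in Python (using the scikit-learn library, with invented data, and not drawn from any study discussed at the conference) of how a ROC curve and the area under it are computed from a model’s scores and the true diagnoses:

```python
# Illustrative only: computing a ROC curve and its area (AUC).
# The labels and scores below are made up for demonstration.
from sklearn.metrics import roc_curve, roc_auc_score

# Ground-truth labels (1 = disease present) and an AI model's scores,
# one pair per patient. Each point on the resulting curve corresponds
# to a decision threshold applied to real people.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.30, 0.60, 0.70, 0.05]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold {thr:.2f}: false positive rate {f:.2f}, true positive rate {t:.2f}")
print(f"Area under the curve: {auc:.2f}")
```

Each threshold on the curve trades false positives against true positives, which is the substance of Woznitza’s warning: choosing where a model operates on that curve determines which patients are missed and which are over-investigated.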
“We are evidence-based professions, that’s part of our licence,” he said. “We need to make sure we’re adopting things using appropriate evidence.”
He urged delegates to question:
“Are we making care better for everybody or are we just magnifying historical biases and amplifying poor outcomes for a certain segment of the population that just hide behind an average?”.
“Are we making it that the people who have been privileged throughout human society just get better quicker or get faster care but those people who don’t historically access healthcare may actually get worse (with growing use of AI)?”.
And in an earlier presentation on his role in a national LungIMPACT trial at UCLH and other UK hospitals into whether AI software can reduce the time taken to diagnose lung cancer, Woznitza raised a critical question about the influence of AI verdicts on clinicians: “How do we make sure people only change their minds when they’re wrong?”
Woznitza referred delegates to a landmark 2019 UK review led by “AI guru” US cardiologist Dr Eric Topol on ‘preparing the healthcare workforce to deliver the digital future’.
It found that digital healthcare technologies – genomics, digital medicine, AI and robotics – should be seen “as a new means of addressing the big healthcare challenges of the 21st century”, and that, within 20 years, 90 percent of all jobs in the National Health Service (NHS) will require some element of digital skills.
“Staff will need to be able to navigate a data-rich healthcare environment. All staff will need digital and genomics literacy,” the review said.
But “most importantly”, it said, health systems must make sure mechanisms are put in place to ensure advanced technology “does not dehumanise care”.
Research led by Topol, published last month in Nature, suggested that multimodal generative medical image interpretation (dubbed GenMI) “could one day match human expert performance in generating reports across disciplines, such as radiology, pathology and dermatology”.
The paper points out there are still formidable obstacles in “validating model accuracy, ensuring transparency and eliciting nuanced impressions”. But, if carefully implemented, GenMI could “meaningfully assist clinicians in improving quality of care, enhancing medical education, reducing workloads, expanding specialty access and providing real-time expertise”.
Australian health policy expert Professor Stephen Duckett was among those who retweeted the paper, commenting on X: “Good review piece, still early days for AI use obviously, but now might be the time to think about implications for MBS prices and policies on use.”
Many of the implications of AI for medical imaging and radiation therapy were on the agenda at the conference, and in other recent research, highlighting how AI systems can significantly enhance diagnostic radiography by improving diagnostic accuracy and efficiency, for example in stroke detection, breast cancer, brain imaging, and chest reporting.
However, Woznitza warned later that sometimes health technology advances have “just shifted the bottleneck”.
For example, where once radiographers may have imaged five CT patients a day, they now did 50 — that was very welcome on the one hand, but “patients still can’t get in to see their doctor to get the results and so actually the time to diagnosis is possibly the same,” he said.
“All we’re doing is shifting where they wait.”
Work with patients
Waiting time is a major cause of stress and anxiety for patients, said patient advocate and cancer researcher Daniel Johnstone (see our earlier report), who admits to being “massively biased” in favour of AI, particularly generative models like ChatGPT, which he has used extensively.
“It’s removed so much redundancy for me that I can focus on being creative and being curious and asking more questions,” he said, urging delegates to “play with and learn from these tools”.
Saying that “AI without human oversight is malpractice”, he nonetheless urged health professionals to “stop being scared” of it.
Johnstone is confident that AI will “help us find redundancy within systems and practices that will make things more streamlined for the patient, the clinician, the scientist, the technician”.
“We need to obviously proceed with caution but stop being scared of something just because we’re not totally aware of what it is capable of,” he said.
Johnstone particularly sees big potential from AI for health literacy and education, including the ability to produce individually tailored communications on cancer, in contrast to “what patients affectionately call ‘death by pamphlet’”: being inundated after diagnosis with information that can be overwhelming.
To do that well means working in partnership with patients.
“You need to be asking them what is useful, because so much technology is built around what is technically appropriate, but not what is socially, contextually appropriate to the needs of the patient,” he said.

More time to care?
Panellists and speakers agreed that medical imaging is going to get faster with AI, but it’s what health practitioners do with the extra time that will be crucial.
There’s no guarantee that time freed up by AI will necessarily give healthcare staff “more time to care”, said panellist Andrew Murphy, a radiographer at the Queensland Children’s Hospital, who told the session his particular focus was on safety and moral-weighted decision-making in AI.
It may just as easily lead to “conveyor belt” pressure for greater patient throughput, he said.
Murphy agreed on the need to critically assess AI products and avenues for how they actually benefit patients and the workforce, “rather than just saying ‘we need to go all in or we’ll be left behind’”.
He also urged a continuing professional focus on technology competence, despite what AI can take on, citing the adage of “[wanting] the pilot to understand how to use the plane if the autopilot doesn’t work”.
He also highlighted the privacy implications of a world where the data collections of corporations and systems are regularly breached.
“You have people saying that they have encrypted software, but they’re organising wars on Signal, and that gets screenshot [for public dissemination],” he said, referring to how Trump Administration officials recently included a journalist in a group chat about military strikes in Yemen.
“I think there’s eventually going to be a breach, it’s going to be huge, and it’s going to be a reckoning and we’re all going to pull back quite aggressively…” he said.
Fellow panellist Daniel Sapkaroski was not so concerned, saying it was still possible to set up a locally controlled network so that “data won’t leave the hospital premises”.
“Maybe it’s an ethical dilemma, but I think you can still kind of manage it in a safe manner if it’s done locally,” he said.
Sapkaroski urged immediate AI education of the workforce: “AI will be part of our practice, we need to start teaching now so we’re ready to get in front of it.”
But he also counselled delegates to stop using ChatGPT for CVs and assignments: “You can very clearly tell”.
For Johnathan Hewis, a conference takeaway was that AI, as a disruptive technology, has “huge potential but needs to be evidence-based and driven by clinical need”.
“AI is just a tool, and like all tools it has strengths and limitations,” he said, “therefore medical radiation sciences professionals need the knowledge and skills to understand and implement AI safely and in co-design with service users.”
Read this X thread from the AI session.
Bookmark this link to follow ongoing Croakey Conference News Service coverage.