Introduction by Croakey: With the rapid increase in the use of digital technology in healthcare, it is critical that these technologies are implemented carefully and safely, particularly in mental healthcare.
While digital technology offers some benefits for mental healthcare, such as anonymous help-seeking, there are challenges in establishing these benefits, according to Dr Piers Gooding and Simon Katterl, co-authors of a new report.
Below, Gooding and Katterl outline some of the challenges including privacy and surveillance concerns, whether users are able to provide meaningful consent, and the immense influence of Big Tech interests, including Elon Musk and Meta/Facebook.
They argue for stronger protection of users’ privacy and human rights, greater accountability, and the inclusion of people with lived experience of mental health issues and mental healthcare in all stages of development.
“When people use digital mental health technologies, they should be safe and secure,” they write.
Piers Gooding and Simon Katterl write:
Mental healthcare is becoming increasingly digital, giving rise to competing visions for the future of our mental health systems.
In one world, new technologies help expand our best responses to mental distress and crises, with digital technologies selectively introduced to augment care in genuinely helpful ways.
In another future, the worst features of the current system are expanded by new technological possibilities, to create a kind of digital asylum for the 21st century, based on control and surveillance.
The ethical, social, legal and regulatory issues that will shape these possible futures sit at the heart of our recent report, Digital Futures in Mind: Reflecting on Technological Experiments in Mental Health and Crisis Support. In the report, we and our international co-authors bring together research on lesser-known issues about digital mental health technologies.
Technologies in this category are broad – and the list will quickly outgrow our review.
Some technologies are “client/patient/service user-facing”, such as digital therapies, bio-informatics or “personalised” mental healthcare, communication tools, and service-user and citizen informatics that enable people to navigate systems.
Other technologies are profession-facing options, including technologies that help professionals with clinical decision-making, information sharing, and patient or population monitoring and surveillance (for example, those that monitor suicide risks).
Real and potential benefits of these technologies are broad, including allowing more confidential and anonymous help-seeking, addressing geographical inequities in care, providing better information for people to navigate systems, and improving monitoring of services and collection of vital statistics.
There are several challenges to establishing these benefits.
One challenge is cutting through the hyperbolic claims of technology vendors. Elon Musk’s claim that his ‘AI brain chips’ will help ‘solve autism and schizophrenia’ is a good example. Many of the benefits proposed by technology vendors and enthusiastic clinical innovators remain unproven, yet arrive like a rapid fire of silver bullets.
What’s more, many of the proposed benefits carry potential for trade-offs and harms that often remain unstated.
One example is Facebook’s “wellness checks”, 1,000 of which were reported to have been conducted in 2018 after the platform expanded its pattern-recognition software to detect users expressing suicidal intent.
The intervention involved the dispatch of first responders, such as police, after suicidal intent was detected. While involving first responders may provide a positive contribution to public health, there is strong evidence to suggest that police wellness checks can, in many cases, do more harm than good. In the US, for example, the prevalence of deadly police encounters with distressed individuals is striking.
Other issues with this form of digital intervention include whether users can meaningfully consent to mental health surveillance, and concerns about how their personal data is managed after an incident.
Questions may also be asked about Facebook’s commitment to people’s wellbeing when some of its executives have reportedly boasted to advertisers that they could target ads to teenage children who felt ‘worthless’, ‘stressed’ and ‘anxious’.
Concerns about surveillance extend to technologies aimed at monitoring people’s “compliance” with mental health treatments. The advent of ‘digital pills’ such as ‘Abilify MyCite’, a product that received approval from the US Food and Drug Administration in 2017, highlights this concern. The technology embeds an electronic sensor in a pill; once swallowed, the sensor transmits information via a patch worn on the arm to an online database, indicating whether the person has taken their medication.
The consumer must give consent as to who can access this information. The medication, which has received approvals in China and the European Union, is targeted at people with specific mental health diagnoses to address ‘the problem of medication adherence’.
Serious questions have been asked about the ethics of doctors being able to spy on distressed individuals in this way.
Surveillance capitalism
One new force shaping the digitisation process in mental healthcare is the business models of surveillance capitalists. Shoshana Zuboff, emeritus professor at Harvard Business School, characterises “surveillance capitalism” as a market-driven process that transforms thoughts, experiences and behaviours into data, which is then commodified for marketing purposes.
These commodification processes rely on increased surveillance and data capture, including both data volunteered by the user and the data “passively” collected, often without the user’s knowledge.
An example was noted in a 2019 report by the advocacy group Privacy International, which examined 136 popular mental health webpages related to depression in the European Union. Over three-quarters of the webpages contained third-party trackers for marketing purposes, which could enable targeted advertising and marketing by large companies such as Google/Alphabet, Amazon and Facebook/Meta.
Another example of the link between mental healthcare and surveillance capitalism relates to a 2021 Bloomberg investigation of ‘Cerebral’, a popular mental health app in the US.
The investigation found evidence that the app led to overtreatment, generating increased sales of home-delivered psychopharmaceutical prescriptions. Former Cerebral employees told journalists that the company prized quantity over quality: more patient visits, shorter appointments and more prescriptions.
Another study demonstrated how prominent apps tend to over-medicalise states of distress, in ways that may over-emphasise ‘individual responsibility for mental wellbeing’.
As with other areas of technological change, these developments need to be governed in ways for which there may be no precedents. Even where precedents exist, it may not be immediately clear how accountability can be enforced, or whether existing or proposed tools are up to the task.
Responsible public governance
Our co-authored report points to urgent areas for consideration and action by developers and governments. Of central importance is embedding the perspectives of people with lived experience in all aspects of this sector.
Our research found that people who have used mental health services, or who have lived experience of profound distress, have been almost entirely excluded from any substantive role in the design, evaluation or implementation of algorithmic and data-driven technologies for ‘online mental health interventions’.
More research informed by lived experience is needed, including research that is led by people with lived and living experience of accessing and using mental health services.
Obligations under international human rights law suggest that “involvement” must go beyond mere participation as research subjects: people with lived experience should set the goals, norms and standards of digital support measures, with design and technical specialists partnering with them to make that a reality.
Privacy is another key concern. Our report makes the case that digital encroachment into people’s subjective experiences through extractive data surveillance processes invites a reassertion of privacy rights.
This includes greater control over the use of data and a broader awareness by governments and civil society of the impact of market dominance over individuals who may feel coerced to trade away their privacy rights for fundamental services.
Australia is currently reviewing its privacy laws, and our report argues that ‘data concerning mental health’ is among the most sensitive forms of health and disability data. Get the privacy equation right for data concerning mental health, and everyone will benefit.
Trust in systems often rests on whether people believe sufficient accountability is built into their processes and culture. Currently, there are few clear mechanisms to scrutinise mental health technologies and their trade-offs, or to provide redress when people experience harm.
Ensuring accountability
Accountability needs to be built into the lifecycle of these technologies, from design (such as the use of impact assessments), to monitoring (such as fit-for-purpose regulatory oversight bodies) and redress (clear remedies for harm caused and the creation of new regulations). The private and marketised nature of many digital mental health technologies also has clear potential to undermine public accountability.
When people use digital mental health technologies, they should be safe and secure.
A recent BBC report noted that two mental health chatbot apps failed to identify child sexual abuse. Addressing safety and security requires that safety planning be built into technologies and processes.
Our report discusses how human rights might be integrated into the design and governance of digital technologies, with attention to the unique human rights issues in mental health, including coercion and involuntary interventions in mental health services.
Non-discrimination and equity are two other key themes in our report. Algorithm-based interventions can exacerbate existing mental health and disability-based discrimination by encoding and accelerating negative social attitudes.
Moreover, the “digital divide” means that those who are excluded from the use of digital technology may be further marginalised as services move online.
Again, this highlights the need for people with lived experience to hold leadership positions at all stages of these processes, to prevent discrimination and ensure equity.
Our report also details other concerns, such as reinforcing the public interest, enhancing human control of technology, ensuring professional responsibility for practitioners and designers, and embedding transparency and explainability in the future governance of these technologies.
If these considerations are embedded, a future of truly accessible and emancipatory mental health supports and collective care can be realised.
The full report, Digital Futures in Mind: Reflecting on Technological Experiments in Mental Health and Crisis Support, can be read here.
About the authors
Dr Piers Gooding is a Senior Research Fellow at the University of Melbourne Law School. He is a socio-legal researcher who focuses on disability and mental health-related law and policy.
Simon Katterl is a consumer workforce member who has worked in community development, advocacy, regulation, and law reform. Simon’s work is grounded in his lived experience of mental health issues, as well as his studies in law, politics, psychology and regulation.
This research was partly funded by the Australian Research Council (No. DE200100483).
See Croakey’s extensive archive of articles on digital technologies and health.