Introduction by Croakey: Today, we start an occasional series examining artificial intelligence and algorithms, and their relationship to inequity. We do this in the knowledge that much of what happens with artificial intelligence, or AI, is not disclosed by either the corporate world or the governments elected to serve us. But we aim to explore what we can and, over time, to explore more deeply.
The first article is a beautifully written extract from a report to the United Nations Secretary-General by Australian lawyer and academic Philip Alston, who has been the United Nations’ Special Rapporteur on extreme poverty and human rights since 2014. Previously, he was the UN Special Rapporteur on extrajudicial executions. He is currently a professor of law at New York University. Philip has kindly granted permission for Croakey to use this extract.
Other articles in the series will examine algorithms and the problems they can engender, digital justice, digital health and more. But for now, over to Philip Alston.
Philip Alston writes:
The era of digital governance is upon us. In high- and middle-income countries, electronic voting, technology-driven surveillance and control, including through facial recognition programmes, algorithm-based predictive policing, the digitisation of justice and immigration systems, online submission of tax returns and payments, and many other forms of electronic interaction between citizens and different levels of government are becoming the norm. In lower-income countries, national systems of biometric identification are laying the foundations for comparable developments, especially in systems to provide social protection, or ‘welfare’, to use a shorthand term.
Invariably, improved welfare provision, along with enhanced security, is one of the principal goals invoked to justify the deep societal transformations and vast expenditure that are involved in moving the entire population of a country not just on to a national unique biometric identity card system but also into linked centralized systems providing a wide array of government services and goods ranging from food and education to health care and special services for the ageing and for persons with disabilities.
Digital welfare state
The result is the emergence of the ‘digital welfare state’ in many countries across the globe. In these countries, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish. The process is commonly referred to as ‘digital transformation’, but this somewhat neutral term should not be permitted to conceal the revolutionary, politically driven character of many such innovations. Commentators have predicted ‘a future in which government agencies could effectively “make law by robot”’, and it is clear that new forms of governance are emerging which rely significantly on the processing of vast quantities of digital data from all available sources, use predictive analytics to foresee risk, automate decision-making and remove discretion from human decision makers. In such a world, citizens become ever more visible to their governments, but not the other way around. See Foucault’s description of panoptic systems, in which those put under surveillance are ‘seen, without ever seeing’ (Michel Foucault, Discipline and Punish: The Birth of the Prison, New York, Pantheon Books, 1977, p. 202).
Welfare is an attractive entry point not just because it takes up a major share of the national budget and affects such a large proportion of the population, but also because digitisation can be presented as an essentially benign initiative. Thus, for example, the [UK] Government Transformation Strategy proclaims that it is intended to transform the relationship between citizens and the state, putting more power in the hands of citizens and being more responsive to their needs. The core values of the Unique Identification Authority of India include facilitating good governance, integrity, inclusive nation-building, a collaborative approach, excellence in services, and transparency and openness.
Presented as altruism
In other words, the embrace of the digital welfare state is presented as an altruistic and noble enterprise designed to ensure that citizens benefit from new technologies, experience more efficient governance and enjoy higher levels of well-being. Often, however, the digitisation of welfare systems has been accompanied by deep reductions in the overall welfare budget, a narrowing of the beneficiary pool, the elimination of some services, the introduction of demanding and intrusive forms of conditionality, the pursuit of behavioural modification goals, the imposition of stronger sanctions regimes and a complete reversal of the traditional notion that the state should be accountable to the individual.
These other outcomes are promoted in the name of efficiency, targeting, incentivising work, rooting out fraud, strengthening responsibility, encouraging individual autonomy and responding to the imperatives of fiscal consolidation. Through the invocation of what are often ideologically charged terms, neoliberal economic policies are seamlessly blended into what are presented as cutting-edge welfare reforms, which in turn are often facilitated, justified and shielded by new digital technologies. Although the latter are presented as being ‘scientific’ and neutral, they can reflect values and assumptions that are far removed from, and may be antithetical to, the principles of human rights. In addition, because of the relative deprivation and powerlessness of many welfare recipients, conditions, demands and forms of intrusiveness are imposed that would never be accepted if they were piloted in programmes applicable to better-off members of the community.
Despite the enormous stakes involved, not just for millions of individuals but for societies as a whole, these issues have, with a few notable exceptions [such as the work of Virginia Eubanks], garnered remarkably little attention. The mainstream technology community has been guided by official preoccupations with efficiency, budget savings and fraud detection. The welfare community has tended to see the technological dimensions as separate from policy developments, rather than as being integrally linked. Lastly, those in the human rights community concerned with technology have understandably been focused instead on concerns such as the emergence of the surveillance state, the potentially fatal undermining of privacy, the highly discriminatory impact of many algorithms and the consequences of the emerging regime of surveillance capitalism.
Threat of digital dystopia
However, the threat of a digital dystopia is especially significant in relation to the emerging digital welfare state. The present report is aimed at redressing the neglect of these issues to date by providing a systematic account of the ways in which digital technologies are used in the welfare state and of their implications for human rights. It concludes with a call for the regulation of digital technologies, including artificial intelligence, to ensure compliance with human rights and for a rethinking of the positive ways in which the digital welfare state could be a force for the achievement of vastly improved systems of social protection.
The report builds in part on reports made after visits to the United States in 2017 and the United Kingdom in 2018, in which attention was drawn to the increasing use of digital technologies in social protection systems. In preparing the present report, I consulted representatives of various digital rights groups, leading scholars and other stakeholders, first at a meeting hosted by the Digital Freedom Fund in Berlin in February 2019, and then at a meeting sponsored by the Center for Information Technology Policy at Princeton University, United States, in April 2019. In addition, a formal call for contributions resulted in some 60 submissions from 22 governments, as well as from international and national civil society organisations, national human rights institutions, academics and individuals in 34 countries, including Australia. While it is impossible to do justice to these rich and detailed submissions in such a necessarily brief report, they are available electronically and I will continue analysing them in the context of my team’s ongoing work on the digital welfare state.