The Immediate Threat of AI Is Not Existential—It Is Bureaucratic

In October 2017, the Kingdom of Saudi Arabia granted citizenship to “Sophia,” a humanoid robot that can mimic social behavior, making it the first robot to receive legal personhood in any country. Although deemed by many a publicity stunt, the Saudi citizenship of a “being” created by a Hong Kong–based company, activated in the United States, and used, among other things, to promote the artificial intelligence (AI) business of a Dutch-incorporated company poses a weighty quandary—not least for lawyers, philosophers, policymakers, and AI practitioners. The move raises a plethora of immediate questions, for example: how does Section 9 of the Saudi Citizenship System, approved by the Cabinet via Decision no. 4 dated 25/1/1374, Hijra, apply to Sophia via Article 35 of the Constitution, given that it is not known whether Sophia is a she, a he, or an it? (This question is no small thing given the country’s sex-based discrimination and segregation.) More in-depth questions, though, are right next door—for example, how can Sophia, or any AI for that matter, fit into the legal-conceptual scheme of legal personhood—namely, the rights and duties that legal personality entails? Does Sophia have agency as per the capacity-for-rights view, for example? If so, what is the position of Sophia, as a subject of rights and obligations, vis-à-vis infants, animals, or corporations? Moreover, and as per doctrine, which attributes of legal personality does Sophia have? Capacity? Patrimony? Marital status? It boggles the mind!

Surprisingly, these questions are comparatively simpler in the case of narrow AI and even generative AI—that is, automated processes with industrial applications or internet-dwelling tools that can produce text, images, videos, or other data, often in response to prompts, and which are therefore not fully autonomous. The path turns thorny, though, when speaking about self-aware systems, machines that would match or surpass the intelligence and ability of the human brain, also known as artificial general intelligence (AGI). And even then, since AGI would be a computer system showing traits similar to a human’s while lacking the capacity for human experience, AGI would be somewhat understandable and predictable. It will not be until AGI gains qualia (subjective, personal conscious experiences) that AI will rapidly advance through a positive feedback loop of self-improvement and the intelligence explosion model, first proposed by I. J. Good in 1965, will come into effect, irreversibly changing humanity and ushering in a “runaway reaction.” Hotly debated terminology and definitions aside, such a breakthrough will clearly mark the moment when we are in the presence of the singularity.

At this point, many experts would question the velocity with which the technology will continue evolving and reach the point of no return. A simple linear projection based on the speed at which this particular technology has moved, though, can offer clues as to its imminence. As recently as 1950, Alan Turing, the father of computer science, asked a simple yet daunting question: can machines think? Only six years later, in 1956, John McCarthy coined the term “artificial intelligence” at the first AI conference, held at Dartmouth College. And what appeared back then to be a sci-fi topic took only a few decades to attract $20 to $30 billion in investment, with a potential economic output of $13 trillion by 2030, as per McKinsey. That is, in less than the lifespan of a person, science fiction turned into science. Public actors have been quick to react, with fierce debate and legislation following. In 2017, the European Parliament proposed a set of regulations to govern the use of AI, including the granting of “electronic personhood” to advanced machines. Then, in an open letter to the European Commission in 2018, 150 experts in medicine, robotics, AI, and ethics criticized the move as nonsensical, fearing that such steps would impinge on human rights. Thereafter, without waiting for the newsworthy back-and-forth to subside, governments started to churn out laws such as Germany’s Autonomous Driving Act, which entered into effect on July 28, 2021, spurring other countries like France and Japan to roll up their sleeves.

The legal world is already heading toward chaos. Politicians will always be politicians, and bureaucrats will always be bureaucrats. As a rough estimate, five or six laws (including statutes, regulations, ordinances, and amendments) are passed every day in the United States alone. Globally, we are talking about a few hundred per day (and yes, ChatGPT came up with that calculation—so it must be true). As for AI-specific legislation, the number of federal and state proposed bills, enacted laws, and sector-specific regulations in the United States has surpassed a couple of dozen in less than five years. The sprawling web of governmental brouhaha about AI is in full swing . . . and AGI has not even arrived yet!

Let’s play a bit of futurism. Let’s recall Robert Conquest’s insightful yet cynical three laws of bureaucracies and organizations. According to the first law, everyone is conservative about what he knows best. The second law says that any organization not explicitly right-wing sooner or later becomes left-wing. And the third law states that the simplest way to explain the behavior of any bureaucratic organization is to assume that it is controlled by a cabal of its enemies. So, per the first law, if people resist change in the areas where they have the most experience, laypeople in power around the world will unleash “change” regarding AI from a bureaucratic and political rather than a technical or ethical stance. Furthermore, per the second law, AI regulation will be strongly ideological—and the ideology will be “woke,” which is really, really bad news for everyone. And, finally, per the third law, all institutions handling matters related to AI will act counterproductively or absurdly, as if sabotaging themselves from within, driving the rest of us utterly bonkers.

So, while rosy-eyed transhumanists à la Kurzweil are hopping from one cotton candy cloud to another, and doom-mongering intellectuals and captains of industry spread the vision of a Skynet sending Terminators to enslave us all, we, the mere mortals (the rest of the world!), will drown in a turbulent sea of inextricable and nonsensical rules way before AI can save (or exterminate) us.


Javier Reyes, PhD, is a university lecturer, lawyer, and full-time pessimist. He lives in Helsinki. His upcoming book The Monkey in the Machine: Is It Ethical to Grant Legal Personality to AGI? (Ethics Press) will be released as soon as he stops slacking and finishes it.
