AI Needs to Have an Electronic Legal Personality

A 2017 European Parliament draft report on civil law rules on robotics, written by MEP Mady Delvaux, made a series of commonsense proposals. Her bad luck was a tiny paragraph, 59(f) to be precise, where she proposed that for autonomous systems, it should at least be considered to grant them status as “electronic persons” with rights and obligations—all aimed at making it easier to assert a user’s rights in court. Hilarity ensued. In an open letter signed by 150-plus so-called experts, the pearl clutching reached a paroxysm when they argued that such an e-person would be “ideological, nonsensical, and non-pragmatic.” Furthermore, the letter’s signatories trotted out the trope that Delvaux’s understanding of autonomous systems was rooted in science fiction, argued that e-personhood would grant human rights to machines, and asserted that corporations cannot serve as a blueprint because “no humans are behind the machine,” as is the case with companies. Eurocrats freaked out as a result, of course, and the idea was discarded.

These “experts” simply did not have their fingers on the pulse of the industry in which they claimed expertise, mind you. This was nothing short of strange, seeing that in 1986, Marvin Minsky predicted that human-level artificial intelligence (AI) would arrive in 2020. Then, in 1999, Ray Kurzweil considered artificial general intelligence (AGI) achievable by 2029. Add to that Elon Musk’s prediction of 2025 as the year “it” would arrive. It took barely five years from the infamous draft report for ChatGPT to be launched, followed by Anthropic’s Claude, Meta’s LLaMA, Google DeepMind’s Gemini, and xAI’s Grok, among other groundbreaking applications. Add to that Boston Dynamics’ Atlas, Tesla’s robotaxi and Optimus, and so on.

However, what worries me the most is the experts’ ignorance regarding corporate legal personality—particularly seeing that there were actual legal academics among the signatories. You see, it is elemental legal history that as Ancient Rome expanded, it faced tremendous challenges to build the empire’s breathtaking aqueducts, collect taxes, supply the military, and import grain—all of them public works. As a consequence, the Romans created the societas publicanorum, a legal vehicle with legal personality and limited liability specially designed for wealthy Roman citizens to safely pool their capital and undertake those high-risk/high-reward activities. It was a thoroughly pragmatic decision, based on purely instrumental motivations—and it worked wonders. There are no records of Roman authorities creating committees to feverishly discuss what the societas publicanorum could mean for the rights and privileges of citizens. Senators didn’t sign a papyrus throwing their toys out of the pram—unlike the European Union, which effectively threw its arms in the air in 2017, just because AI did not exist then the way it does today. *sigh*

You see, if AIs were required to “incorporate,” it would be simpler for a harmed party to pursue accountability in a court of law. AI labs today aren’t single entities. They are labyrinthine webs of providers tied together through a fiesta of joint ventures, partnerships, licensing agreements, cross-border arrangements, and a myriad of links that fatten the wallets of law firms all over the world. Imagine a plaintiff juggling flaming swords in order to find that “someone” who is responsible! Instead, the legal action would go straight and solely against the e-person and, if its resources (yes: its patrimony) were not enough to redress the harm, the mechanism of veil piercing would allow the plaintiff to reach the corporation behind the e-person, leaving it to the corporation to scramble in search of the culpable party among that web of companies, joint ventures, and licensing and collaboration agreements just mentioned.

Another horror story is the legal jungle around AI that is growing right now. Take Europe as an example. Aside from the European Union’s Frankenstein of an AI Act, an AI Liability Directive (AILD) has been in the works for three years already. The goal of the AILD is to harmonize the liability regimes of the EU’s 27 member states. Such a process would not only take decades but is also doomed to fail. Legal systems are downstream from culture, history, and all the elements that make a country unique. As a tiny but very telling example, once the AILD is transposed—whenever that happens during the next century—the remedies available to a harmed citizen in one country would still differ from those available in the same case in another country. Add to that the learning curve for judges, lawyers, clerks, and everyone else involved in the judicial system. Instead, allowing each jurisdiction to “simply” plug the e-person into the corporate model would link it automatically to all the bodies of law ancillary to corporate law, e.g., procedural, environmental, intellectual property, bankruptcy, criminal, civil, data privacy, and everything in between.

There are, admittedly, plenty of question marks around this idea. The corporation would serve only as a loose blueprint; careful, sui generis tailoring would be required. Yet that pales in comparison to the years—decades, almost—it would take to glue together the hundreds of acts, directives, guidelines, white papers, codes of conduct, and regulations applicable to AI. Moreover, the citizenry would wield a weapon extremely similar to the one they already know when harmed by the actions of a corporation. And, as a cherry on top, doing so would bypass the need to insert the suppository of a one-world government to deal with the risks of AI—while not precluding international cooperation among sovereign states, as already happens in areas such as corporate governance, anti-money laundering, the fight against tax avoidance, and so on and so forth.

Just think about it, okay? Among all the grandiloquent ideas floating around, this is by far the quickest to implement and the most elegant. AI won’t wait for politicians to do their job. It’s quarter to twelve, bucko. Better hurry up.


Javier Reyes, PhD, is a university lecturer, lawyer, and full-time pessimist. He lives in Helsinki. His upcoming book The Monkey in the Machine: Is It Ethical to Grant Legal Personality to AGI? (Ethics Press) will be released as soon as he stops slacking and finishes it.
