Oct 27 2023

AI As Legal Entities

Should an artificial intelligence (AI) be treated like a legal "subject" or agent? That is the question discussed in a new paper by legal scholars. They recognize that this question is a bit ahead of the technology, but argue that we should work out the legal ramifications before it's absolutely necessary. They also argue that it might become necessary sooner than we think.

One of their primary arguments is that it is technically possible for this to happen today. In the US a corporation can be considered a legal agent, or "artificial person," within the legal system. Corporations can have rights, because corporations are composed of people exerting their collective will. But in some states it is not explicitly required that a corporation be headed by a human. You could, theoretically, have a corporation run entirely by an AI. That AI would then have the legal rights of an artificial person, just like any other corporation. At least that's the idea – one that could use discussion and may require new legislation to sort out.

This legal conundrum, they argue, will only grow as AI advances. We don't even need to fully resolve the issue of narrow AI vs. general AI for this to be a problem. An AI does not have to be truly sentient to behave in ways that create both legal and ethical implications. They argue:

Rather than attempt to ban development of powerful AI, wrapping of AI in legal form could reduce undesired AI behavior by defining targets for legal action and by providing a research agenda to improve AI governance, by embedding law into AI agents, and by training AI compliance agents.

Basically, we need a well-thought-out legal framework to deal with increasingly sophisticated and powerful AIs, to make sure they can be properly controlled and regulated. It's hard to argue with that.

For now the legal framework seems to be that the AI is treated like any other technology: if it fails due to corporate incompetence, negligence, or malfeasance, the company is legally responsible. We see this with self-driving cars. If a human is driving, they are responsible for any accidents. But if the car is driving itself, the manufacturer is responsible. The technology is responsible for what the technology does, through the corporation that made it.

I think this framework will get us pretty far, actually. AI is really just a tool. If someone uses a hammer to kill someone, they are responsible, not the hammer manufacturer. But if the hammer, under normal use conditions, explodes into metal shards and injures the user, the manufacturer is very likely responsible. There are always edge cases that the legal system has to deal with. What if the manufacturer specifically designed the hammer to be really good as a weapon for killing people, in order to expand their market into the lucrative psychopath demographic? That's a silly example, but it's legally not far off from cases contemplated against gun manufacturers (it was even a case on Law & Order).

Similarly, there has been a lot of discussion about self-driving cars. On one level culpability is about how well the technology works overall. As long as it works within acceptable parameters, and the manufacturers give users sufficient informed consent, they won't necessarily be liable for inevitable accidents. But what about the decision algorithm they bake into their software? What "choices" will the self-driving car make, with what ethical implications, and can the manufacturer be liable for parameters that are considered ethically dubious? Again – these edge cases will have to be sorted out legally, and having a framework to work within would be helpful.

The legal challenges become increasingly difficult, however, as AI becomes ever more sophisticated. At some point AIs will be making complex decisions at the human level (and again, we can bypass the question of whether or not they are truly sentient). In many respects they may surpass humans, or have qualities that are highly desired, such as cold objectivity. It is not hard to imagine a board of directors essentially appointing a purpose-built AI as CEO of a corporation, allowing the AI to make top-level business decisions about the company. What if (just to use a blatant example for the purpose of illustration) the AI decides that company efficiency and profit can be maximized if it replaces all the female employees with an all-male workforce? That obviously won't fly, but who is liable? The AI, the board that appointed the AI, the corporation as a whole? Can we require AI CEO software to abide by certain ethical and social justice standards? Again, the answer is obvious (yes), but that is exactly what the authors are saying. We need explicit laws for this.

Beyond obvious cases like this, there will be nuanced edge cases. It is not hard to imagine a near future where the alleged objectivity and predictive power of AI is put in charge of a long list of important decisions – college admissions, redistricting, the prime interest rate, military targets, investment funds, medication prescribing, and many others. At first these will be used as expert systems – providing one piece of information (an AI recommendation) while human experts make the ultimate decision and take the ultimate responsibility. Ideally, perhaps, we will stay indefinitely in that mode: AI makes recommendations, but there is always a human in the loop.
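To make the "human in the loop" idea concrete, here is a minimal sketch in Python of a decision gate in which the AI only recommends and a human must explicitly accept before anything is acted on. Everything here (get_ai_recommendation, human_review, the Recommendation record) is a hypothetical placeholder invented for illustration, not any real system's API.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "approve" or "deny"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    rationale: str     # human-readable explanation for the reviewer

def get_ai_recommendation(case: dict) -> Recommendation:
    # Placeholder for whatever model or expert system is in use.
    return Recommendation(action="approve", confidence=0.92,
                          rationale="meets all stated criteria")

def human_review(rec: Recommendation) -> bool:
    # The human expert sees the recommendation and its rationale,
    # and makes (and is responsible for) the final call.
    print(f"AI recommends: {rec.action} ({rec.confidence:.0%}) - {rec.rationale}")
    return input("Approve? [y/n] ").strip().lower() == "y"

def decide(case: dict) -> str:
    rec = get_ai_recommendation(case)
    if human_review(rec):
        return rec.action              # human accepted the AI's recommendation
    return "escalate for human re-review"  # human overrode it; responsibility stays human

if __name__ == "__main__":
    print(decide({"applicant_id": 12345}))

The only point of the sketch is that responsibility stays with the human reviewer; deleting the human_review step is exactly what "removing the human from the loop" means in the next paragraph.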

But it will be tempting, in many contexts, to remove the human from the loop. That is what is happening with driverless cars – no human behind the wheel. What if objective evidence shows that outcomes are statistically better with the AI recommendation than when a human second-guesses the AI? The speed and efficiency of the AI might also be desirable, with human experts just slowing things down. When trying to get an edge in the marketplace, microseconds matter. Already there are trading algorithms that act without a human in the loop.

How tempting will this be in warfare, where seconds also matter and can mean life and death? Will we trust automated AI systems to make these life-or-death decisions, to decide acceptable civilian casualties, or acceptable military losses in order to achieve a goal?

We can keep going. Eventually there may be no significant aspect of modern civilization and life that is not potentially controlled by an AI that is faster, better, and cheaper than the human version. In this arguably highly likely scenario, AI will not conquer humanity; we will happily surrender for the convenience. We also have to consider the power of AI in the hands of dictators and authoritarian governments (perhaps the scariest scenario of all).

It makes sense to start thinking about the legal, ethical, and technological implications of advancing AI, and to start building in some safeguards. I am not really worried about a Matrix or Battlestar Galactica scenario, and the Asimov laws of robotics will not help us. We should worry about the power of even narrow AI to slowly take over the running of our society, and about who holds the real power in that situation. Do we want to leave our future in the hands of a few technology companies (more than we already have)? What balance of rights and powers do we want between corporations and individuals? How do we prevent AI tools from being used by authoritarians to eventually take over the world?

There is a certain balance between democracy and authoritarianism, and I don't think history provides a clear answer as to which has the evolutionary advantage. Democracy was doing really well for a while, but is now retreating before a significant advance of authoritarianism. I am not willing to assume that democracy will just naturally win out long term. It seems that social media (which was supposed to be a huge win for democracy) has inadvertently put its fat thumb on the scale for authoritarianism, simply by making it really easy to radicalize large portions of the population. Will AI do the same? Can we recover from that? Can we prevent it with the proper legal framework? Good questions.
