Scholar ponders a future when artificial intelligence will have rights

Matthew Liao

By DAVID DUPONT

BG Independent News

A philosophical talk that ventured to the fringes of science fiction wound its way to the hot-button real-world issue of abortion.

Matthew Liao, director of the Center for Bioethics at New York University, was at Bowling Green State University recently to discuss "The Moral Status and Rights of Artificial Intelligence." It was the first event sponsored by the Institute for Ethics and Public Philosophy.

Liao posited conditions in which robots or other artificially intelligent entities could have greater moral status than their creators.

But as the question and answer session after the talk wore on, the issue of abortion came up. Liao argued: “The idea here is that if the entity has some sort of physical code … that generates moral agency … then that’s sufficient reason to think that it can be a rights holder.”

He was asked whether that reasoning didn't grant moral status to a fetus. Liao responded that the fundamental right of bodily integrity would trump it, just as someone wouldn't be expected to give up a limb in order to save another person. Doing so may be admirable, but it is not morally required.

Some reasons for having an abortion, he said, may be specious, but having a fundamental right, as he defined the right to bodily integrity, also entails sometimes misusing that right.

So how then could robots come to have greater rights than humans?

It wouldn’t be, Liao said, because they were more intelligent, rational or empathetic. It would be because they had some as yet unidentified quality.

That's quite a leap from 2011, when computers were developed that could beat the greatest Jeopardy champions at their own game, or the masters of the ancient game of Go. And then a new generation of machines arrived that could beat those earlier machines. That new generation of computers learned not from human behavior but by self-reinforced learning.
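The "self-reinforced" learning described above — a system improving by playing against itself rather than imitating human games — can be illustrated with a toy sketch. This is an illustrative assumption, not code from the talk or from the Go-playing systems it references: a tabular self-play learner for the simple game of Nim (take 1 or 2 stones each turn; whoever takes the last stone wins), using Monte Carlo value updates from complete self-played games.

```python
import random

random.seed(0)

def train(episodes=20000, n_start=10, alpha=0.5, eps=0.2):
    """Learn move values for Nim purely by self-play.

    Q[(stones, move)] estimates the value of `move` for the
    player about to move when `stones` remain.
    """
    Q = {}
    for _ in range(episodes):
        stones = n_start
        history = []  # (state, move) per ply, players alternating
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if random.random() < eps:          # explore occasionally
                m = random.choice(moves)
            else:                               # otherwise play greedily
                m = max(moves, key=lambda a: Q.get((stones, a), 0.0))
            history.append((stones, m))
            stones -= m
        # The player who took the last stone wins (+1); walking the game
        # backward, the sign of the reward flips at every ply because
        # the two "players" are the same learner on alternating turns.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    """Greedy move under the learned values."""
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda a: Q.get((stones, a), 0.0))

Q = train()
print(best_move(Q, 4))  # from 4 stones, take 1 and leave the opponent 3
```

With no human examples at all, the learner discovers the classic Nim strategy of always leaving the opponent a multiple of three stones — a miniature of the shift the article describes, from imitating human play to surpassing it through self-play.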

These are not just academic exercises.

“Different companies are trying to learn about human emotions to get robots to be more human-like,” Liao said. This is important as countries, especially Japan, face shortages of caregivers for the elderly.

“Some of the elderly become really attached to these robots,” Liao said. “That’s going to become more an issue as robots become better at what they do.”

Some entities have greater moral status than others, moving up from rocks to plants to animals to animals with moral agency.

That capability for moral agency is encoded into people. Could that capability be replicated in artificial intelligence?

That could happen in three ways.

One way would be to upload the human brain. Scientists could image it, create a simulation and then upload it to software. In Europe there’s a $1 billion project to create computational models of the brain, he said.

Efforts are also underway in the United States. If that can be achieved, he said, then since the physical brain has moral agency, the artificial simulation would have the same moral agency.

That, Liao said, “may be possible.”

Another approach is to make a gradual substitution. The U.S. government has been trying to replace neurons as a way to repair the damaged brains of war veterans and those suffering from post-traumatic stress disorder.

If this were done gradually, one neuron at a time, nothing would seem to change. “This would be an artificial intelligence with the physical basis for moral agency,” Liao said.

The third option is through coding. Artificial intelligence is now coded for distinct tasks, such as playing chess, but it cannot also make coffee, he said. Could it be programmed to act more generally to address ethical and moral issues?

“The problem I see is that those rules are too specific. You would still be telling the computer what to do,” Liao said. If the entity is just following the rules, it has no moral agency.

If artificial intelligence did get rights, what would those look like? The rights spelled out in the Universal Declaration of Human Rights, such as freedom of thought and speech, are aimed at ensuring humans have what they need to flourish.

What do computers need to flourish? Maybe, Liao said, it would be to control their subjective rate of time.

Probably it would not be the right to procreate because they could easily reproduce millions of copies of themselves.

While this all seems very far off, Liao noted, scientists have consistently underestimated the speed at which artificial intelligence technology has developed.