How do we define the humane treatment of Artificial Intelligence? Does Artificial Intelligence deserve the same rights as humans, or even animals? How do we eliminate AI bias? What core moral values should an AI hold?
These are some of the ethical questions being tackled by John Basl, an assistant professor of philosophy at Northeastern University. According to Basl, we're closer than ever to creating machines that are roughly as cognitively sophisticated as mice or dogs. He argues that these machines would deserve the same ethical protections we give to animals used in research.
“The nightmare scenario is that we create a machine mind and, without knowing, do something to it that’s painful,” Basl says.
“We create a conscious being and then cause it to suffer.”
Basl and his colleague at the University of California, Riverside, propose the creation of oversight committees to carefully evaluate research involving artificial intelligence. The committees would be composed of cognitive scientists, artificial intelligence designers, philosophers, and ethicists.
But a philosophical question lies at the heart of all this:
How will we know when we’ve created a machine capable of experiencing joy and suffering, especially if that machine can’t communicate those feelings to us?
The liberal view holds that consciousness requires nothing more than well-organized information processing. The more conservative view requires specific biological features, such as a brain similar to a mammal's, before a machine could be conscious.
At this point, Basl says, it's not clear which view will prove correct, or whether there's another way to define consciousness that we haven't considered yet. But if the more liberal definition is right, scientists may soon be able to create intelligent machines that can feel pain and suffering, and that therefore deserve ethical protections.