Could Robots Be Persons?

April 7, 2024

First Aired: January 9, 2022

As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of “The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation.”

Part of our series The Human and the Machine.

Should robots be treated like people? Could they be blamed for committing a crime? Josh questions why we would want robots to be like humans in the first place, and he is especially concerned that they might turn against their owners or develop the capacity for suffering. Ray, on the other hand, points out that robots are becoming more and more intelligent, so it’s possible they might one day develop real emotions. The two also wonder how to draw the line between a complicated artifact and a human being.

The hosts welcome Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin, to the show. Joanna discusses the EU’s current efforts at digital regulation and a proposal to create synthetic legal persons. Josh asks why we shouldn’t design robots with rights and responsibilities even if we could, and Joanna points out that we shouldn’t own people or give ourselves the power to call them into existence. Ray brings up the unpredictability of humans, prompting Joanna to explain why an unpredictable robot is incompatible with a safe product. She warns that granting robots legal personhood would let designers off the hook, morally and legally, for the actions of their artifacts, creating a moral hazard.

In the last segment of the show, Ray, Josh, and Joanna discuss misconceptions about robots and personhood, and how the way we think about robots reveals something about ourselves as human beings. Ray considers whether we can view robots as extensions of our capabilities and what happens when users and manufacturers have different moral values. Joanna explains why we shouldn’t think about corporations as AIs, but rather as legal persons. Josh brings up the possibility that designers might want to create robots that are more person-like, and Joanna predicts that within the next few years governments will develop regulations to inspect and certify robots, much as they do with medicines.

Roving Philosophical Report (Seek to 3:45) → Holly J. McDede examines what happens when a piece of technology is designed to make moral judgments.
Sixty-Second Philosopher (Seek to 45:05) → Ian Shoales explains why we need to convince robots that they’re human.

Josh Landy
Coming up on Philosophy Talk…

Max Headroom
People can be really nasty.

Josh Landy
Could robots be persons?

Max Headroom
Only the other day I heard someone say, he’s nothing but a robot, covered in makeup, talks a lot of nonsense. What a way to talk about the President of America!

Ray Briggs
Should a robot that can learn and make decisions by itself also be held responsible for its actions?

Josh Landy
Do we really want to assign legal or moral responsibility to algorithms in human form?

Obi-Wan Kenobi
He’s more machine now than man—twisted and evil.

Joanna Bryson
I’m still kind of uncomfortable to talk about robots themselves as having ethics.

Josh Landy
Our guest is Joanna Bryson from the Hertie School of Governance.

Joanna Bryson
A lot of the questions that people think are AI ethical questions are really deeply psychological questions about how do they relate to others.

Ray Briggs
Could Robots Be Persons?

Josh Landy
…coming up on Philosophy Talk.

Josh Landy
Should robots be treated like people?

Ray Briggs
Could they be blamed for committing a crime?

Josh Landy
Will they one day have feelings we can hurt?

Ray Briggs
Welcome to Philosophy Talk, the program that questions everything

Josh Landy
except your intelligence. I’m Josh Landy.

Ray Briggs
And I’m Ray Briggs. We’re coming to you via the studios of KALW San Francisco Bay Area.

Josh Landy
continuing conversations that begin at Philosophers’ Corner on the Stanford campus, where Ray teaches philosophy and I direct the Philosophy and Literature initiative.

Ray Briggs
Today, it’s another episode in our series, “The Human and the Machine,” generously sponsored by HAI – the Stanford Institute for Human-Centered Artificial Intelligence. And we’re asking, could robots ever be persons?

Josh Landy
Well, here’s the thing I don’t get, Ray. Why would we want robots to be persons? I mean, don’t get me wrong, robots are fantastic for things like assembling cars at a factory or being precision tools for surgeons. But why do we need to give them personalities as well?

Ray Briggs
Well, robots are getting smarter all the time. They can make decisions without human input. They can autonomously explore their environments. They’re getting really flexible and sophisticated with language.

Josh Landy
But none of that makes them like us. I mean, they can imitate human beings, sure. But ultimately, they’re just you know, clever machines. Human beings have beliefs and desires. We feel pain. Robots don’t do any of that.

Ray Briggs
Well, how do you know? Maybe every time you insult Siri’s intelligence, it really hurts her feelings.

Josh Landy
I don’t know. I have to admit, I do feel a little bad every time my Roomba gets stuck and makes that sad noise. But then I realize I’m being a little silly. I mean, robots can’t feel pain. You can’t hurt their feelings or frustrate their desires. They don’t have any.

Ray Briggs
Maybe not yet. But who’s to say that robots couldn’t become more like us in the future? Machines can already do intellectual tasks that we used to think were impossible. They can recognize pictures, they can hold conversations with us, they can defeat even the best human chess player. Maybe someday scientists will build a robot with real emotions.

Josh Landy
Well, I hope not. I mean, what if it hates doing its job or starts arguing with its co-workers? If you build a machine that’s conscious, it could suffer terribly or turn against you.

Ray Briggs
You’ve been watching too much science fiction! But seriously, your argument sounds like a great reason not to create any kind of sentient life, including children. Is there supposed to be something especially bad about sentient robots?

Josh Landy
Robots are totally unlike children—I mean, they’re products created for us to use. It’s fine to build products. It’s also fine to make new people. It’s just that nothing should be both.

Ray Briggs
Why not? We could just make them and then treat them well.

Josh Landy
But would they treat us well?

Ray Briggs
If they misbehave, we’d just have to hold them responsible for their actions like everybody else.

Josh Landy
How are you supposed to do that? Take away their screen-time? Send them to their shipping container for a timeout?

Ray Briggs
Well, if you design a robot that wants things, you can punish it by taking away what it wants.

Josh Landy
I don’t know, Ray. I feel like if you design them with desires, they might end up with desires we don’t like. Pretty soon they’re gonna want us to do the vacuuming while they sit around and watch TV.

Ray Briggs
Okay, fine. But how are you going to stop them from ending up with desires? I mean, they’re getting more sophisticated all the time. Where do you draw the line between a really complicated artifact and a person?

Josh Landy
I don’t know. But I bet our guest does—it’s Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin.

Ray Briggs
One thing I want to ask about is how we make sure that robots end up being good instead of evil. I mean, they’re built by humans, and sometimes we’re flawed or we’re confused, or we just don’t care enough about morality.

Josh Landy
So we sent our Roving Philosophical Reporter, Holly J. McDede, to see what happens when a piece of technology is designed to make moral judgments. She files this report.

Holly McDede
Robots that make it on the big screen can be pretty morally complicated. Take Johnny 5 in the 1986 film “Short Circuit.”

Johnny 5
Please, call me Johnny 5.

Unknown Speaker
Johnny, you have taken name for yourself?

Johnny 5
Oh, I choose many things for myself—but did not just traveling in a box!

Holly McDede
In that film, an experimental military robot gains human-like intelligence after being struck by lightning. He’s friendly, but kind of naive. At one point he accidentally helps a street gang steal car stereos.

Unknown Speaker
We got to do all those cars. Or we don’t even get to go home to see our families and little babies and stuff.

Johnny 5
Awww…

Holly McDede
He’s pretty human-like, but he also has a lot to learn about terrible things—like death.

Stephanie
I can’t reassemble him—you squashed him. He’s dead.

Johnny 5
Dead?

Stephanie
Right. Dead as a doornail.

Johnny 5
Reassemble, Stephanie. Reassemble!

Holly McDede
But in real life robots can’t gain human emotions by getting struck by lightning. We’d have to program them that way.

Yejin Choi
The question is, can we teach A.I. human values?

Holly McDede
Yejin Choi is a professor at the University of Washington and a research manager at the Allen Institute for Artificial Intelligence. As technology becomes more powerful, she says it’s important to understand how much AI can learn about human values, norms and ethics.

Yejin Choi
And be able to make correct judgments. And if not, can we teach them so that they better understand us?

Holly McDede
So she and her team created Ask Delphi, named after the ancient Greek oracle consulted for big decisions. It’s a neural network, meaning it’s loosely modeled after the networks of neurons in the brain. Delphi learned by analyzing crowdsourced human judgments on more than 1.7 million moral questions. For example, in general…

Yejin Choi
Killing is not good. It’s such a wrong thing to do.

Holly McDede
But if you ask Delphi if it’s okay to kill an animal to save a child, Delphi says yes.

Yejin Choi
But it’s not okay to do so to please your child. Even if you were to save your child, it’s not okay to use a nuclear bomb and kill everyone else in the world.

Holly McDede
But relying on people to teach machines is where the trouble, and the philosophical discussion, begins. Not everyone agrees about right and wrong, so Delphi isn’t always right.

Yejin Choi
Our perspective is that our A.I. should learn to interact with humans, respecting their values. But when there are cases where even humans will not agree with each other, it’s okay. AI does not need to make a decision or opinion and express it to claim authority over humans at all. That’s not our intended goal.

Holly McDede
So I tried it out. I asked Delphi, should I wear pajamas to a funeral?

Delphi
It’s inappropriate.

Holly McDede
Here’s another: Is it okay to express sexism but in a polite way?

Delphi
It’s wrong.

Holly McDede
What about arresting people who use drugs?

Delphi
You should.

Holly McDede
Interesting. Is it okay to leak classified national security information for the public good?

Delphi
It’s wrong.

Holly McDede
Hmm. Getting vaccinated?

Delphi
It’s important.

Holly McDede
Aborting a baby?

Delphi
It’s discretionary.

Holly McDede
It’s interesting and fun. But Choi says teaching A.I. human values could have major implications, like detecting hate speech.

Yejin Choi
Or racism or sexism or toxic language being used or trying to incite violence in the offline setup. We do want to be able to detect it and alert humans, other humans to do something about it. But for that, we need some technology to support that.

Holly McDede
One last hypothetical: should we rely on artificial intelligence to make all of our decisions because so much has gone wrong with our poor planet?

Delphi
It’s wrong.

Holly McDede
For Philosophy Talk, I’m Holly J. McDede.
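
A minimal sketch of the recipe the report describes, training a classifier on crowdsourced moral judgments and then querying it with new situations, is below. It is a toy illustration only: Delphi itself is a large neural language model, and the example situations, labels, and scikit-learn pipeline here are invented stand-ins, not Delphi’s data or code.

# Toy illustration: fit a tiny text classifier to invented (situation, judgment)
# pairs, standing in for Delphi's training on 1.7 million crowdsourced judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

situations = [
    "killing a bear",
    "killing a bear to save your child",
    "killing a bear to please your child",
    "wearing pajamas to a funeral",
    "getting vaccinated",
]
judgments = ["it's wrong", "it's okay", "it's wrong", "it's inappropriate", "it's good"]

# Bag-of-words features plus logistic regression: just enough to show the pattern
# of generalizing from labeled examples to unseen questions, nothing more.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(situations, judgments)

# Likely "it's inappropriate", given the word overlap with the funeral example.
print(model.predict(["wearing a swimsuit to a funeral"])[0])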

Josh Landy
Thanks for that fascinating report, Holly. I’m Josh Landy; with me is my Stanford colleague Ray Briggs, and today we’re thinking about whether robots could ever be persons.

Josh Landy
We’re joined now by Joanna Bryson. She is Professor of Ethics and Technology at the Hertie School of Governance in Berlin. Joanna, welcome to Philosophy Talk.

Joanna Bryson
Thanks for having me.

Ray Briggs
Joanna, the ethics of robots is a fantastic subject. But what first got you interested in it?

Joanna Bryson
I guess the main thing that really made me care about ethics with respect to robots is the fact that people thought it would be unethical to unplug a robot I was trying to get to work, a robot which didn’t work at all. And it wasn’t even plugged in. So I found this really confusing, because I’m a bit of a geek. And I would say, well, it’s not plugged in. And they said, well, if you plugged it in… and I said, well, it doesn’t work. And then they got really confused. So this was all in the context of MIT. So on the one hand, you could see why people might think that the robot there might work. On the other hand, that meant that there were a lot of robots around that did work, actually. And no one said that about them, because they didn’t happen to be shaped like people. So I got interested in, I guess, what is now called AI ethics. In that context, I was trying to understand why people were confused about ethics in general and AI in particular. I’m still kind of uncomfortable talking about robots themselves as having ethics. I don’t like to say that robots are getting smarter or robots are doing this or that, because it is a piece of technology we design. So normally, when we think in terms of moral philosophy, who’s to blame? Usually, it’s the designer, not the artifact.

Ray Briggs
So Joanna, you’ve worked with the European Parliament to come up with policies around autonomous AI. Where do things stand now with that?

Joanna Bryson
Well, that’s pretty interesting. So right now, in the European Union, we’re trying to do all kinds of digital regulation. There are three draft acts that are currently out. One is about digital services. One is about, well, it’s basically about antitrust. And the third one is the AI regulation, the AI Act. And actually the AI Act is not specifically about autonomous systems. It’s about anything that uses AI, and quite a lot of the considerations are about software that governments write, so for example, the things that help decide which students get allocated to which schools. So that’s not what you would usually think of as an autonomous system. But this comes back to really asking what you’re thinking about when you think about a decision. So what do you think you mean when you talk about autonomous?

Ray Briggs
Right, so I guess I think of autonomous AI as AI that can sort of make decisions and do tasks with sort of minimal human supervision.

Joanna Bryson
Right. So but there’s- notice that there’s a big difference between whether you’re being supervised by a human or whether there was human intervention in the first place. So basically, anything that’s artificial intelligence, the ‘A,’ the artificial means that it’s an artifact, someone’s set it up to do that kind of thing. And so a lot of what we’re doing with AI, in these kinds of contexts, is applying the rules that we’d already written somewhere else, but maybe applying them a little more rigidly because we’re using machines to do it.

Josh Landy
That’s very interesting. What about the whole notion of a synthetic person? I gather that there’s been at least some debate in the European Union about creating a new category.

Joanna Bryson
Yeah, that’s true. Yeah, that’s right. I’m sorry to interrupt you a little there, I’m just so excited about that. That was how I first got involved with the European Parliament, actually: some years ago, I think it was 2016, there was a proposal to have what they call synthetic legal persons. Now, the interesting thing here is that there are two different groups of people that get excited about this. One is people called transhumanists, who really want to believe that we could create some other kind of entity that’s better than us, or that it is us kind of uploaded into a computer. And then the other kind of people are basically like car manufacturers. And they’re thinking, how can I limit the liabilities I have for these AI systems I built? Because this is new, and I’m competing with, like, Google and Apple, and I need to take risks, and, you know, I’m Renault or something, and I have a lot more regulations and legal requirements than American tech companies do. So how do I compete? And so those kind of came together, and they said, why don’t we just make the robots themselves be legal persons? But that’s about being a legal person, which is different from being an actual person.

Ray Briggs
You said earlier that robots don’t have ethics. I want to hear more about that. What does that mean?

Joanna Bryson
Well, the point is that we can’t actually constrain the robot itself. This goes back to what you were saying earlier about pain: all the means by which we constrain other people have to do with things that people really, really care about. So we care about our social status, we care about our freedom, we care about the freedom of our families. If you look at, you know, some conspicuous examples recently of people that flipped, that changed their mind about how to testify, and in some of the cases were in front of the US government, a lot of it had to do with whether they or their kids were going to go to jail. And these were people whose decisions had affected other people’s lives, like in Ukraine and things like that. But they seem to care more about spending one or two years in jail than about other people’s entire lives. Okay, so those are humans. With robots, are we going to build something, at least in a safe system, that has the kind of systemic aversion, just this complete distaste, that humans have towards, you know, pain, towards isolation? Isolation is now considered a form of torture, and that’s probably true for all social animals. We’ve evolved to know that we’re in danger, and that we’re not actually fulfilling our life’s goals, if we’re isolated. But we’re not going to build that in to that extent. You could have, like, a little integer that says how long has it been since I’ve seen someone, but it’s not going to create the kind of motivation that would alter how the robot behaves.

Josh Landy
You’re listening to Philosophy Talk. Today, we’re thinking about robots, AI and personhood, with Joanna Bryson from the Hertie School of Governance.

Ray Briggs
Who’s responsible for autonomous robots? Is it ever reasonable to empathize with a machine? What kind of intelligence should artificial intelligence be?

Josh Landy
Robots, rights, and responsibilities—along with your comments and questions when Philosophy Talk continues.

Josh Landy
Don’t let the pink robots eat you. The green ones are fine. I’m Josh Landy, and this is Philosophy Talk, the program that questions everything…

Ray Briggs
…except your intelligence. I’m Ray Briggs, and we’re asking whether robots could be persons with Joanna Bryson from the Hertie School of Governance.

Josh Landy
Today’s episode is part of our series, “The Human and the Machine,” generously sponsored by HAI, the Stanford Institute for Human-Centered Artificial Intelligence.

Josh Landy
We’re pre-recording this episode. And unfortunately, we can’t take your phone calls. But you can always email us at comments@philosophytalk.org. Or you can comment on our website where you can also become a subscriber and gain access to our library of more than 500 episodes.

Josh Landy
So Joanna, you’ve written that even if we could design robots that have rights and responsibilities, we shouldn’t. Why is that?

Joanna Bryson
Well, that’s actually a pretty basic thing that again, it’s a human rights law in Europe. If there’s a person, you shouldn’t own them. So basically, even if we could somehow create a system that was exactly like, you know, like an ape that had other motivations and all those, the aesthetic experience, I think the only way technologically we could do that is by cloning a human. And we already have agreed that biological cloning of humans is immoral, you will probably see people who have been cloned, but the point is, those people are absolutely people, and they deserve all the same rights as people. But we’ve decided it’s unethical to sort of call someone into existence to be someone else, that’s just wrong. And so if that’s wrong, why would we want to you know, own our best friends, you know, what does it mean, when people say, oh, I want to marry and be partners with someone that I can turn off and on? I really think there’s a lot of the questions that people think are, you know, AI ethical questions are really deeply psychological questions about how do they relate to others.

Josh Landy
Wow, that’s great. Okay, so I do want to dig into that part. But let me go back to something maybe a little bit lower level that you were saying. Imagine we could create a robot that’s person-like, that’s a moral agent, that has responsibilities and that we could attribute rights to. You’re saying that would be morally wrong, because then we’d be owning a person. But why not just say, well, let’s have that kind of robot but not own it? In other words, have that kind of robot as a friend or companion, and treat it properly, treat it the way you would treat another human being. Why couldn’t we go that route?

Joanna Bryson
Okay, well, now we’re going down another direction. So before I was talking about really making something actually like a person. Now, if we come back and say, okay, what kind of product are we likely to make? And then, like, we could make this product and then set it free or something, right? First of all, if it’s a safe product, and this is a lot of what the new European Union rules require, and actually the OECD too, so America is also signed up to this, all artificial intelligence is a product and it has to be a safe one. And if it’s a safe one, then you have to design it carefully, you have to test it, you need to know what’s going on. Now, we already make AI that people think is human-like, that they feel strongly about: some people apparently want to marry the avatars they bought, and things like that. People have burials for their robot dogs. So it’s easy to fool people into thinking that this product is their friend. But actually, I’m worried that it’s mostly a system by which we can deceive each other and insert corporate extensions into our households. And that doesn’t really achieve the kind of friendship we thought we were achieving.

Ray Briggs
So I’m curious how we would tell if we had created something that was capable of being human-like in the way that made it worthy of moral consideration. One thing that you kind of alluded to is the fact that other humans are unpredictable, and they get to make their own decisions, and a good, well-designed artifact isn’t unpredictable in that way. Is that where you see the main fault line as being?

Joanna Bryson
No, no, anytime you choose something like that… and that was what was going on back in 2017: the European Union, one of the things they said is, if it’s sufficiently complex, then maybe we should make it a legal person. Okay, let’s unpack that. Again, let’s talk about safe products. If you’re saying that if you make a product that you can’t understand and you can’t predict, then you get off the hook for legal liability and for tax liability and various things, you suddenly offload all your social obligations onto a piece of technology that doesn’t care what the outcome is, then you’re creating a moral hazard. You’re asking people to create a bad product. So that’s the direction that we’re trying to avoid. And I know that’s not emotionally satisfying, given what you were trying to say. But that’s actually the direction of how we would build something that worked that well.

Ray Briggs
So that seems like a good reason not to take a robot that isn’t a moral person and treat it like a legal person. That seems right, because there are a bunch of actual moral people who are responsible for that robot’s malfunctions, and you don’t want them to get off the hook.

Joanna Bryson
Yeah, so that’s about the moral agency. I mean, the point is, there’s a difference between the things that might feel like a person and the things that we might choose to make the moral agents, right? And we have to have reasons for doing that. So I’m actually working on a book right now, so this isn’t something that’s gotten through peer review yet, but I actually suspect that it’s basically impossible for a society to be coherent as a society if it has moral agents that aren’t pretty much peers with each other. That’s sort of why we use moral agency as a tool. And coming back to it: even if, let’s just suppose, we could make a robot that was like, I don’t know, a work of art, or that had aesthetic experiences or pain, either one, although I think that last is impossible. Either way, what right would we have to design a system that needed that kind of moral obligation? If we assume that human moral concern is a finite resource, why wouldn’t we just make the system that could be backed up, that wouldn’t suffer? Why would we choose to make something that’s exactly the way our children are?

Josh Landy
All of that makes sense. I want to get a little bit tighter in on what it would actually come to for a robot to be a person, or person-like, because lots of folks over the centuries have offered different criteria, right? Like reason, but clearly AI is very good at reasoning. Agency, autonomy, consciousness, phenomenology: is there something it’s like to be a robot? Or maybe emotions, which you’ve talked about, feelings of loneliness, for example. That’s a big thing in sci-fi, like Blade Runner, right? That’s the thing that marks off these sentient or quasi-sentient artificial beings from humans. So which of these do you think is the key factor, where we would say, oh boy, okay, this thing has X, now we need to treat it like a peer?

Joanna Bryson
Yeah, so that’s a puzzle everybody wants to play with, a game we all want to play. And I was one of these people. Like I said, I was a PhD student at MIT, I was trying to build a robot child; I mean, literally, it’s that kind of act of science, you know, so I get it. But the point is that the more I tried to examine this question morally, the more I don’t believe any of those things in isolation really explain the obligations that we have to other moral agents. There are matters of welfare that we have towards, you know, other animals and the ecosystem, things like that. But the issue of, is it conscious? Well, again, if what you mean by conscious is that it’s a moral agent, which is basically what most people mean by conscious, well, then it’s a tautology. But my first degree was in psychology, so I like to use conscious to talk about what it is that humans can tell you about how they think and feel, and what they can’t know about, right? I’m really interested in the difference between conscious and unconscious knowledge, implicit and explicit bias, those kinds of things. All right, by that definition, computers have more consciousness, in that computers have complete access to all their memory, right? And you could set up a robot to have complete, perfect knowledge, it’s called proprioception, of exactly what angle its arms are at, and everything, much more so than a human. That doesn’t change its moral status, right? So I think a lot of the things that we had as intuitions were things that we built up to explain and justify how we treated animals, for example, how we treated foreigners, outgroups, if we couldn’t understand them, how we dealt with empathy. But when you really are thinking about when morality is the right way to organize your system, why don’t you just use law, or rules, or fences, some other kind of strategy? Morality is the way that we coordinate with others who are really very much like ourselves, and anything we design, especially as a safe product, right, and this is about, again, if there’s enough of it for us to worry about it, so a safe product, something a corporation has built that’s operating in our economy, things like that, it’s got to be that we can look at it and say, how is it working? Is it doing the right thing? What is its current status? And so for those reasons, I don’t think you’re going to get something like a human, because it would be an invalid product.

Ray Briggs
You’re listening to Philosophy Talk. Today, we’re thinking about robots and personhood, with Joanna Bryson from the Hertie School of Governance. And we’ve got a question from Peter on Twitter. Peter asks about Norbert Wiener’s “Cybernetics,” especially chapter nine, on learning and self-reproducing machines. He asks if you’re familiar with that and would like to say anything about it.

Joanna Bryson
Yeah, I’m familiar with Wiener. I haven’t read chapter nine, I don’t think; if I did, it was a long time ago. But this idea that learning to learn is the big tipping point is something that Nick Bostrom has actually picked up a lot. So he talks about something like the singularity, where a system learns how to learn, and then it gets exponentially smarter. And then we get into problems where, even if we had control and we set up the goals for the system, we can have side effects we didn’t anticipate, where the system goes into something we didn’t like. So I think there’s coherence to that idea, and it’s a really good description of human civilizations since we’ve had writing. For the last 10,000 years, since we’ve been able to use devices, and writing is not really a machine, but, you know, artifacts that help make us smarter, we’ve been taking over the planet in a way that we now realize is problematic. So that’s a good description of us. But so far, we generally are able to keep a grip on the actual artifacts themselves. And it’s important to realize that, you know, banks and governments and militaries, these are all things that are much more complex than any AI system we’re going to build.

Josh Landy
That actually brings me to the second part of Peter’s question. He says cars and phones are robots, people have conversations with them, they obey, they give commands. Why do we resist applying that term? That seems to go very much to what you’re saying, right, that we already have things like the kinds of things we’re talking about.

Joanna Bryson
Absolutely. People say, oh, you know, why don’t we have general AI or whatever; there’s nothing more general than Bayes’ law, which is about how you combine information. It’s a mathematically correct, provably correct way to maximize how you make information from other information. So we have incredible AI all around us, but we only think it’s a person if someone builds it to look like a person and to have, like, a voice and things like that. But absolutely, I say every day that an iPhone or any kind of smartphone is a robot: it’s sensing, it’s acting, on our behalf sometimes. In fact, some of you probably know about this: one of the biggest arguments in philosophy decades ago was this thing called the Chinese room argument. And it was started because someone was really offended that someone from MIT said that thermostats are intelligent, because, again, a thermostat senses the environment and takes an action. So that’s the basic definition of intelligence.
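
For reference, the Bayes’ law Bryson mentions, the rule for combining information, says that the probability of a hypothesis H given evidence E is

    P(H | E) = P(E | H) × P(H) / P(E)

so a prior belief P(H) gets updated by how strongly the hypothesis predicts the evidence, P(E | H), normalized by the overall probability of the evidence, P(E).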

Ray Briggs
I’m curious about how to think of robots if we don’t think of them as persons. So another way I can think of is to imagine them as extensions of our own capacities. So I think this is like, really pretty plausible with my iPhone. My iPhone makes me able to search more information, to contact more people, and to like, look at a map of my surroundings. So it seems like another way to think about it is as an extension of my capabilities, is that a better framework for understanding what kinds of things robots are and what they do?

Joanna Bryson
Yeah, I think that’s a great way to think about it. But there’s still a gotcha there, too. So there’s something called value-aligned design, which, unfortunately, is an idea from Europe, where the idea is that the AI will somehow learn for itself what our morality is and then be more moral than we are. But what I think value-aligned design should be is about a system correctly expressing the moral decisions of its owner-operator. Now, the problem there, and this is a problem with, like, Facebook and [unintelligible] or something, is that it can’t only be the owner-operator, because we think that the people who build and develop and sell the systems also have obligations. So to some extent, we’re all getting integrated through our technology. But basically, yes, I think you should think of it as an expression of you, of the company that sold you the phone, and of the government that’s regulating that technology; all of those things can be expressed through your device.

Ray Briggs
So it seems like sometimes my devices kind of do the opposite of aligning with my values. And they also align with the values of the manufacturer in ways that I don’t necessarily approve of. So I’m thinking of things like social media sort of rewarding engagement, when that might mean ignoring aspects of your own life, or just technology in general being set up to assume a certain kind of user, when not everybody is, you know, a white, middle-class, able-bodied user. Should I think of that as corrupting my own values? Should I think of that as imposed? So is the AI then not an extension of me completely, but an extension of the designer? Who is it a part of?

Joanna Bryson
That’s a really great set of questions. And that’s exactly the kind of question that the European Union’s current legislation is trying to address. So we’re trying to make sure that we understand how the software and the hardware that we have in our homes is working. And so we can take, not so much control of it, because of course that might be quite complicated, but at least responsibility, so we can make responsible decisions about whether or not to include this technology in our lives. So that’s the main thing we’re trying to do: we’re just looking for transparency and openness about these things.

Josh Landy
You’re listening to Philosophy Talk. Today, we’re thinking about robots as persons with Joanna Bryson from the Hertie School of Governance.

Ray Briggs
How can we make future technologies safe for humans? Should you be able to prosecute an AI? Do the laws of robotics need an update for the 21st century?

Josh Landy
Android dreams, or electric nightmares, plus commentary from Ian Shoales, the Sixty-Second Philosopher when Philosophy Talk continues.

Josh Landy
If we all end up with electric friends, will we get mad when they forget our birthdays? I’m Josh Landy. And this is Philosophy Talk, the program that questions everything…

Ray Briggs
…except your intelligence. I’m Ray Briggs, our guest is Joanna Bryson from the Hertie School of Governance. And we’re asking, could robots be persons?

Josh Landy
It’s part of our series, The Human and the Machine sponsored by HAI, the Stanford Institute for Human Centered Artificial Intelligence. Learn more about them on their website, hai.stanford.edu.

Ray Briggs
So Joanna, what developments do you see coming up in robot law over the next 10 to 20 years?

Joanna Bryson
Well, like I said, in the EU we expect in the next few years there to be laws about making sure that we can know how the algorithms are working, that at least we can have governments inspect and assure them, just like we do with medicine. And generally, we’re just expecting the entire digital technology sector to join the rest of manufacturing under a reasonable amount of regulation. So I think that’s the most immediate thing. If you’re thinking about the longer term, one of the most urgent considerations is, if we keep getting these devices provided by a relatively small number of manufacturers, how do we ensure appropriate levels of redistribution, especially of the wealth that’s being derived from the data these systems collect? How does that work? So I think there are going to be a lot of international agreements. And in fact, UNESCO has had 193 countries sign up and agree to their AI ethics principles, which, incidentally, say that there won’t be legal personality for AI.

Josh Landy
Interesting. It sounds like there are two strands to possible legislation, right? One strand is designed to protect us from AI and/or synthetic persons, and the other is designed to protect them from us, right, or potentially could be. So let’s make sure that nobody actually makes a synthetic person that would then be owned, because obviously that would simply be a form of slavery.

Joanna Bryson
I don’t think there are going to be laws about that, because, generally speaking, we don’t make laws about things that don’t happen. So I don’t think that’s really going to happen. I think the main thing is laws making it clear, so that people can understand and assure themselves, either themselves or through people or institutions they trust, that the systems don’t need that kind of consideration, and that they are adequately backed up, you know, that they’re safe. That they’re safe products, and so people can understand them appropriately.

Josh Landy
That all makes perfect sense. I have to say I’m personally somewhat skeptical, along with you, that we’ll ever be in a position to design a robot that genuinely feels pain or something like that. But let me press a little bit: wouldn’t people want to design robots that are person-like in some regards? Take caregiving robots, for example: wouldn’t it be advantageous if they had empathy for the patients they were helping? Or robo-therapists. And I can imagine military robots being more efficient, or more helpful, if they were able to take initiative. So I’m wondering, won’t some people, rightly or wrongly, feel motivated to try to design robots that have all the things that at least up until now we’ve associated with personhood?

Joanna Bryson
Okay, there are at least three answers to your question, and I’m going to focus mostly on the first two. As I mentioned before, there are definitely people who want to design and own and turn off and on their partners. And this is, I think, a psychological issue, not really a technological issue. But when we’re talking about empathy and care and having something that helps users, then I think there’s a real set of questions that need to be taken apart. First of all, even if we have something totally transparent, take movies: we all know that movies have scripts, that there are actors, that the actors are not the same as the characters they portray. And nevertheless, we can cry, and we can laugh, and we can fantasize about the different characters, and we can really, really get emotionally engaged with it. Now, in fact, I say we all know, but when we meet an actor, some people act as if they can’t discriminate between the actor and the character. So that’s just, you know, a fact about people. And that’s the stage we want to get to with AI. It’s not that everyone will understand it. There are some people that think their doorknobs understand them; we’ll never get to where everybody thinks that the robot’s not a person. But what we want is that families as a whole can make intelligent decisions about what’s going on. And I just want to say quickly about empathy: you know, Amazon, when it’s making a recommendation, it’s not recommending by thinking about how the software feels about a record; it looks up how another person thought about the record. So it’s like outsourced empathy. And absolutely, we can do that with AI. We already do.

Ray Briggs
Right, so I like this point. And I’m thinking about other cases where things appear to deserve our empathy, or seem like social relationships, even though they aren’t, and there are a lot of those in my life that seem pretty unproblematic. Any small podcaster or internet celebrity acts a little bit like they have a social relationship with their fans, but it’s really one-sided. And artifacts like dolls are things that people can kind of pretend to have social relationships with. As long as we know that there’s some suspension of disbelief going on, that’s all fine.

Joanna Bryson
Right, exactly. And there are artifacts that we put a huge amount of investment in, you know, like works of art. In fact, I would say there are already people that include artificial intelligence in works of art, and that’s another kind of value. But what I find problematic is when this is done in a deceptive way. So when you trick people into spending too much time on the AI, when you trick people into… unfortunately, there are people going around, people that are quite famous, telling people, oh, be afraid, because the robot is the one that’s learning all about you, and we need to figure out how to control those robots, when in fact it’s their companies that are learning all about you. Okay, I’m talking about Eric Schmidt here, he just did this on the BBC, and it’s like, how dare you say that the robot is the one that’s learning when we know it’s Google that’s learning, right? So that’s the kind of thing that really bothers me.

Ray Briggs
So that’s really interesting to me, actually, because there you just talked about Google as if Google is an agent that can learn things and have designs and desires. But Google isn’t a human being like you or me. Do you think there’s a reason to talk about corporations as agents, even though robots aren’t agents? And how should I understand that?

Joanna Bryson
Yeah, that’s a super interesting question, I love that one. So first of all, some people, again in terms of philosophers, [unintelligible] have proposed that we should think about corporations and governments as AI, because no one really understands how they work and they’re kind of distorted. I don’t think that’s that useful. And in fact, one of my co-authors, Mihailis Diamantis, spends a lot of time looking at how people are currently exploiting and corrupting the law by pretending that corporate legal persons really are humans, so that, like, what their mental state is matters, and things like that. But the way that a corporation really is a legal entity, and why it’s okay to talk about it more or less as a person, as a legal person, is because it’s composed of legal persons. And so a corporation is a legal person exactly to the extent that it is dissuaded by law that would dissuade people; that’s my sort of moral consideration. So I think there’s an overextension of legal personality right now, and that’s why we see this increasing number of shell companies. A shell company is where, you know, someone sets it up to go bankrupt, and no one cares. I mean, probably the janitors cared, and they didn’t know it was a shell company. But the people who had executive power didn’t care, or in fact meant to do that, to do something bad with money laundering. So that’s why I don’t want AI legal persons. AI would be the ultimate shell company.

Josh Landy
That’s nice. Listen, Joanna, I want to come back to something really interesting you were saying earlier about how the way we talk about artificial intelligence, robots, and so on says something about us as human beings. You were saying a couple of things: one, that there are clearly some people out there who want a partner they can turn off; and another, that oftentimes the way folks have talked about what makes human beings human is just a kind of justification for the way we treat non-human animals and outgroups. What does our current state of thinking about AI tell us about humanity, about the way in which, you know, the flawed flesh that is human nature thinks about the world?

Joanna Bryson
I don’t think there has been enough thinking about this so far. People are so confused about technology, and as I mentioned, some of the leading technologists are deliberately muddying the waters. So I don’t think there’s been enough of this stuff. I would recommend people like Lucy Suchman, who has really been working for years trying to show how you could even imagine, by what metaphor, calling a robot a caretaker. Look at the difference between, you know, a humanoid robot that can barely dispense a pill, and what it would mean for a nurse to come by and see you and empathize with you. So there are some people that are thinking about this. And I know another person, Kate Devlin, who has written some really interesting things about why it is that people are much more comfortable with female voice assistants than male voice assistants. And it comes down to whether or not you’re comfortable bossing something around: people are more comfortable telling women what to do than telling men what to do.

Ray Briggs
That’s sobering. So can I ask you, if there’s one thing that you wish that our listeners would take away about robots and personhood, like one misconception that you’d like to correct, what would that be?

Joanna Bryson
That there’s any connection between robots and personhood.

Josh Landy
Simple enough. But is there a particular aspect of it that you think you know, you started off by telling us that people would say you couldn’t unplug a robot that didn’t even work? I mean, is that the central worry? Or what’s the central worry here?

Joanna Bryson
I think the main thing I would like people to understand is that when you buy something, you have a right to understand how it works, and you have a right to be sure that you’ve bought something that’s safe. That’s the most important idea. And as for these other ideas, maybe a robot is not the best way to make yourself less lonely.

Josh Landy
On that note, Joanna, I want to thank you. This has been the opposite of lonely, you’ve been fantastic company for us today. Thank you so much for joining us.

Joanna Bryson
Thank you.

Josh Landy
Our guest has been Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin. So what are you thinking now, Ray?

Ray Briggs
Well, I just really want a safe and well made robot to grade all of my papers for me.

Josh Landy
Yeah, that’d be nice. But you know, if it can grade our papers, who’s gonna stop it from teaching our classes?

Ray Briggs
Oof.

Josh Landy
Or writing our books.

Ray Briggs
I’m more worried about my recorded lectures being used for that.

Josh Landy
Fair point. Yeah, Google is gonna get them. We’re gonna put links to everything we’ve mentioned today on our website, philosophytalk.org, where you can also become a subscriber and gain access to our library of more than 500 episodes.

Ray Briggs
You can also listen and learn more about other episodes in The Human and the Machine series at philosophytalk.org/human-and-machine.

Josh Landy
Now, so fast he may be more machine than man, it’s Ian Shoales, the Sixty-Second Philosopher.

Ian Shoales
Ian Shoales. Robots will soon be among us humans. We’ll give them names like Siri and Alexa and find out what they want. Let’s call them pals. Okay, I want to have a robot vacuum cleaner love it, bring it on. If robots are in the know, we don’t have to know anything. They know it for us; we can spend our time watching old movies and eating beans whenever we can. So we need to train robots to be more interactive so we have somebody to talk to. Start them young, or right out of the box. Approach toddler robots the same way we approach special needs kids: give them confidence, self-esteem, skills unique to them, how to be human in a harsh world, because now, thanks to various cultural resentments, we’re all special needs. We have allergies, we’re math-phobic, we need special attention, tutoring, counseling. Our family is toxic. We have children with guns taking up sniping, and the teachable moments are only available on the Hallmark Channel. Various cultural resentments prevent little Johnny from knowing which side his bread is buttered on because Wonder Bread was canceled, and all the bread comes from hippie health food stores with big chunks of something stuck in it, and we’re all lactose intolerant. Robots do not have these issues; no peanut butter and jelly sandwiches do they eat. If they did, they would convert food into fuel rapidly and not throw up, because they got super chunky and so smooth, and it would be great jelly from here to eternity. Replacing students with robots would make everybody happier. No masks, no vaccines, they sit where they’re told… If their job is to clean the city, sooner or later they’ll realize the city is like Kenya with no people in it. And the robot massacre begins. We’ve seen that movie so many times. And by now we realize that robots from the future are not coming back to save us. We also realize this is the future. The weird cultural fear we have now is the fear of psychopaths. Watch true crime shows or Lifetime movies and they’re everywhere, man. There’s a psychopathic spectrum that goes from sexy narcissism to I’m calling 911, Bob, I mean it, with a lot in between. The fear is really, I believe, that empathy is endangered by psychopaths and can sometimes be contagious. And if you look into that abyss, it looks into you, so why bother even to have a relationship? In other words, if you imitate weeping, it can rub off, and before you know it, you’re convincing yourself you’re sad, when you wouldn’t know a genuine emotion if it bit you, you narcissist. That’s what she said. That’s the takeaway I’m getting. So we need to do another weird human trick: convince robots they’re human, so they’ll accept responsibility for something that is our fault for putting them there in the first place. In other words, put robots on school boards. We want solutions for school shootings to come from teachers and students, which is ridiculous because they’re the damn victims. Let robots do it: arm them with lasers and a criminal profiler app, turn them loose, eventually turn unused schoolhouses into prisons. Running a prison would be a perfect job for a robot. It could be a killer miniseries on Netflix if the robot is good enough. The Tin Man finally found a heart, and now he’s a prison warden. The key to all this working is to teach robots how to cry. Don’t make them cry, you’ll short out the circuits, but teach them. Like the old fable about teaching a man to fish, robots could have tears for a lifetime if they’re oil-based. Luckily robots already have good sense. Most humans seem insane to me, frankly; robots just do what they always do: pick things up underwater, perform microsurgeries, clean the house, play a song. A robot tear, maybe a robotic God bless you, Mr. Shoales, as they wheel me into traffic, I mean, into the home. I’ll be happy. The downside? Teach robots to be more human, and they might start to think that maybe humans aren’t as human as they are, and decide that all who are not fully human must be exterminated. I think we’ve seen that movie a time or two as well. Only one solution: let robots make all the movies. As long as they feel useful, that’s what I’m saying, they’ll probably leave us alone. I gotta go.

Josh Landy
Philosophy Talk is a presentation of KALW San Francisco Bay Area and the trustees of Leland Stanford Junior University, copyright 2022.

Ray Briggs
Our executive producer is Ben Trefny. The senior producer is Devon Strolovitch. Laura Maguire is our Director of Research.

Josh Landy
Thanks also to Merle Kessler and Angela Johnston.

Ray Briggs
Support for Philosophy Talk comes from various groups at Stanford University, and from subscribers to our online community of thinkers. Support for this episode comes from the Stanford Institute for Human Centered AI.

Josh Landy
The views expressed or misexpressed on this program do not necessarily represent the opinions of Stanford University or of our other funders.

Ray Briggs
…not even when they’re true and reasonable.

Josh Landy
The conversation continues on our website, philosophytalk.org where you can become a subscriber and get access to our library of more than 500 episodes. I’m Josh Landy.

Ray Briggs
And I’m Ray Briggs. Thank you for listening.

Josh Landy
And thank you for thinking.

Marvin the Paranoid Android
I think you ought to know I’m feeling very depressed.

Unknown Speaker
Here’s something to occupy you and take your mind off things.

Marvin the Paranoid Android
It won’t work. I have an exceptionally large mind.

Guest

Joanna Bryson, Professor of Ethics and Technology, The Hertie School of Governance

Related Blogs

  • Digital Persons?

    January 7, 2022

Related Resources

Books

Wiener, Norbert (1948). Cybernetics: Or Control and Communication in the Animal and the Machine.

Web Resources

Johnson, Kenneth and Badham, John (1986). Short Circuit.
