The Social Lives of Robots
February 18, 2024
First Aired: November 14, 2021
Machines might surpass humans in terms of computational intelligence, but when it comes to social intelligence, they’re not very sophisticated. They have difficulty reading subtle cues—like body language, eye gaze, or facial expression—that we pick up on automatically. As robots integrate more and more into human life, how will they figure out the codes for appropriate behavior in different contexts? Can social intelligence be learned via an algorithm? And how do we design socially smart robots to be of special assistance to children, older adults, and people with disabilities? Josh and Ray read the room with Elaine Short from Tufts University, co-author of more than 20 papers on human-robot interaction, including “No fair!! An interaction with a cheating robot.”
Part of our series The Human and the Machine.
Can robots learn to read social cues? Is it possible to derive empathy from an algorithm? Ray questions the necessity of socially intelligent robots for jobs that humans already do well. Plus, she is skeptical about how numbers and computing data can create something as complex as social intelligence. Josh, however, argues that it is useful for robots to learn how to read social cues because they must navigate different environments and spaces.
The philosophers are joined by Elaine Short, Professor of Computer Science at Tufts University. Elaine’s work focuses on robots that help and learn from people, as well as what happens when robots exit the lab and enter the world. Ray asks if robots are truly learning social intelligence or if they’re simply simulating humans, but Elaine considers the distinction to be unimportant in her field. Josh asks about the success of companionship robots, which leads Elaine to describe the success of animal and zoomorphic robots. She believes that humanoid companionship robots will still take time to develop, especially since a large problem in social robotics lies in managing human expectations.
In the last segment of the show, Josh, Ray, and Elaine consider the tension between popular science and sci-fi representations of robots and how they actually operate. They look at various weaknesses of socially assistive robots, such as their potential to make mistakes and cause accidental emotional harm, as well as accessibility barriers and high costs. Elaine emphasizes the importance of increasing diversity in robotics and computing, and she explains how assistive robots can aid in disability rights and empower people with disabilities.
Roving Philosophical Report (Seek to 4:27) → Holly J. McDede discusses how a robotic bee and a robot designed to help kids with autism spectrum disorder are impacting the social lives of their respective communities.
Sixty-Second Philosopher (Seek to 49:01) → Ian Shoales examines the diversity and tropes of robots in pop culture.
Josh Landy
Could a robot ever really understand you?
Ray Briggs
Could it at least help your kids play well with others?
Josh Landy
Would you trust a robot to take care of your child?
Ray Briggs
Welcome to Philosophy Talk, the program that questions everything—
Josh Landy
—except your intelligence. I’m Josh Landy.
Ray Briggs
And I’m Ray Briggs. We’re coming to you via the studios of KALW San Francisco Bay Area,
Josh Landy
continuing conversations that began at the philosopher’s corner on the Stanford campus, where Ray teaches philosophy, and I direct the Philosophy and Literature initiative.
Ray Briggs
Today, it’s the first episode in our new series, “The Human and the Machine,” generously sponsored by HAI, the Stanford Institute for Human Centered Artificial Intelligence.
Josh Landy
And we’re kicking off this series by thinking about the social lives of robots.
Ray Briggs
The social lives of robots? What, are they hanging out with other robots going to robot dinner parties or something?
Josh Landy
That sounds like a cool party. I hope I get invited. No, the thought is that robots are interacting with human beings more and more, and we are social creatures. So, if robots are going to end up assisting us, they better learn to read social cues, you know, like knowing what we’re looking at, what our facial expressions mean. And they’d also better learn to behave so that we feel comfortable interacting with them.
Ray Briggs
Ah right, so we’re talking about developing socially intelligent robots.
Josh Landy
Exactly.
Ray Briggs
Okay, I see what you’re saying. But, I don’t see the point. So, computers are amazing at lots of kinds of intelligence, you know, they can calculate pi to 1000 digits, they can let you order pizza with a click of a button, they can show you what you’d look like if your face got combined with an ocelot. I mean, why do we also need them to be our best friend?
Josh Landy
Well, we’re not really talking about computers here, right? We’re talking about robots. A robot has a kind of body and it can move around in space. So, robots need to be able to perceive and navigate different kinds of environments. And they need to be able to figure out how to behave themselves in those different environments.
Ray Briggs
Sure sure, robots are not laptops, I get that. But, robots are like laptops. I mean, robotic intelligence is computational. Their behavior is just a result of ones and zeros. How do you get social intelligence out of that?
Josh Landy
Well, okay, I’m no computer scientist. But, I don’t know, can’t we develop algorithms that allow robots to model and mimic human behavior? Isn’t that what machine learning is all about?
Ray Briggs
Yeah, clearly that’s the goal. But, how are you going to do that? Our social intelligence doesn’t just come from manipulating numbers and computing data. We have empathy, we perceive meaning in facial expressions, we just naturally follow each other’s eye gaze. Even a newborn baby has more social skills than a robot.
Josh Landy
Well, sure, okay. Socially intelligent robots are going to be very different from human beings. I mean, we’re talking about artificial intelligence after all.
Ray Briggs
Haha, so you’re admitting that it’s never going to be as good as a real human being? Why bother, then? Robots already help us deliver packages, and like, weld stuff? Do we really need them to be nurses and teachers? Let’s leave those jobs to the real people.
Josh Landy
Oh, well I don’t know, I think there’s all sorts of reasons to be enthusiastic about robot nurses. I mean, look, a ton of nursing work is hugely physically demanding, like, you know, helping people sit up or get out of bed. Human nurses could do other stuff. You know, that stuff where the human touch is really important. And let the robots do all the heavy lifting.
Ray Briggs
Yeah, but if that’s all the robot is doing, why does it need to have social intelligence? I mean, why not just work on designing robots that are really good at lifting people out of bed?
Josh Landy
Well, because patients are individuals. We don’t all have the same needs as each other. Let’s say your robot’s lifting someone out of bed, and they feel discomfort somewhere in their body. A really good robot would be able to know that just by looking at them and immediately adjust what it’s doing.
Ray Briggs
Okay, so you’re saying if robots are going to be interacting with us and assisting us with all these tasks, they need to be able to anticipate our needs, based on things like body language, and that’s why they need to be socially intelligent.
Josh Landy
Yeah, exactly. And that’s why I’m so excited to talk to our guest this week. Elaine Short, a computer scientist who actually works on designing socially assistive robots.
Ray Briggs
And she’s going to tell us how to get social intelligence out of an algorithm?
Josh Landy
I hope so! In the meantime, we sent out Roving Philosophical Reporter Holly J. McDede to find examples of how robots are impacting the social lives of humans and other creatures. She files this report.
Holly McDede
Before we get to the social lives of humans, we’ll start with another creature known for sophisticated rituals, complex societies and brains. Bees.
Tim Landgraf
There’s so much to learn, and it’s just, you know, quote unquote, “just an insect.” Though, I think they have almost 1 million neurons in their brains.
Holly McDede
Tim Landgraf is a roboticist at the Free University of Berlin. A few years ago, he began using robots to better understand this thing bees do called “the waggle dance.”
Tim Landgraf
It basically looks like the bees [are] shaking from side to side, and then turning left, and doing that waggle again, and turn right… So, it kind of looks like a shape of a figure eight.
Holly McDede
It’s not just a dance, it’s a way to communicate to other bees where the food is.
Tim Landgraf
It’s not just, “Hey, I found food.” It’s, “Hey, I found food. It’s really good. You got to go that direction, and you got to find it that far. And then you’ll find something there.”
Holly McDede
So, he got to work designing a bee robot that would replicate the waggle dance. But it turns out, bees won’t listen to just any old robot. In fact, they had a very different reaction.
Tim Landgraf
All the things that we tried putting in the first time were kind of attacked, and they tried to drag it out. They bite it, and by biting it, they leave pheromones that mark those bites, and then everybody starts coming in. Bites and gnaws and drags it away.
Holly McDede
This work took years. Tim remembers the first time the experiment finally seemed to work.
Tim Landgraf
We saw bees actually running after the dancing robot. And they do that in a very specific kind of pattern. And I remember I was like, I was freezing. I was like, I don’t know what to do. Should I continue here?
Holly McDede
The thing is, sometimes the bees responded to the dance in the way the researchers wanted. Sometimes they didn’t.
Tim Landgraf
And why that is, is it because the dance, the robot dance, was not nice enough? Or not good enough?
Holly McDede
Our desire to use robots to understand bees says a lot about us. Humans are curious and sometimes meddlesome. But researchers are also designing robots to help improve how we interact with each other.
Brian Scassellati
Robots offer a unique kind of stimulus. They’re odd in that they, in many ways, seem lifelike, but they’re not really alive.
Holly McDede
Brian Scassellati is a professor of computer science and mechanical engineering at Yale University. He builds robots too. Not because he’s interested in technology, but because he’s interested in understanding people.
Robot
I’m so happy to finally meet you. I’m so excited about being friends.
Holly McDede
In a study from 2018, Brian brought a robot into the homes of kids with autism spectrum disorder. The robot worked with the child for 30 minutes every day. Its job was to interact with the kids and their caregivers through games and storytelling.
Robot
Let’s pick up from where we left last time: move each item into its proper box.
Holly McDede
All the children who worked with a robot showed gains in their social behavior, like making better eye contact or initiating communication.
Brian Scassellati
So why do children with autism spectrum disorder respond in this way to robots? I thought I had the answer to that eight times now, over the last two decades. We’ve tested seven of these ideas, and they were absolutely false.
Holly McDede
His guess is that robots trigger social responses without the full complexity of human interaction.
Brian Scassellati
When the robot does something, for example, like turn and look at you and make eye contact, you respond very naturally to it and it triggers that social response in you. But it’s not so socially complex, that you feel embarrassed by what the robot thinks, or feel that the robot is judging you in some way.
Holly McDede
For many of the kids, the robots gave them opportunities to do what they might not be willing to do with another person.
Brian Scassellati
That experimental sort of attitude, allows them to train that social behavior and then transfer it to interactions with people.
Holly McDede
And that relates back to his mission to use robots not to change the physical world, but to help people like a good coach or therapist might.
Robot
I think we’ll make lots of great memories together over the next four weeks. Each day, I will have some games to play. I’m really excited to play. Which game would you like to play first today?
Holly McDede
Now, if only the bee robot and this robot could teach each other social skills or how to waggle dance. For Philosophy Talk, I’m Holly J. McDede.
Josh Landy
Thanks for that really fascinating report, Holly. I’m Josh Landy, with me is my Stanford colleague, Ray Briggs, and today we’re thinking about the social lives of robots.
Ray Briggs
We’re joined now by Elaine Short. She’s a professor of computer science at Tufts University, and co-author of more than 20 papers on human-robot interaction, including “No fair!! An interaction with a cheating robot.” Elaine, welcome to Philosophy Talk.
Elaine Short
Hi, thank you for having me.
Josh Landy
Elaine, social robotics is such a fascinating new research area. Can you tell us a bit about how you got involved with it?
Elaine Short
Yeah, so I actually started out my college career as a biomedical engineering major, and then very quickly discovered that I am both squeamish and hate physics. So, I quickly switched into computer science, where I wouldn’t have to do any labs with dissections and where I wouldn’t have to take any more physics classes. But I’m the kind of person who likes hard, interesting problems. And as a computer scientist, I started thinking about, if I have to spend the rest of my career doing something, 30-40 years, what’s the absolute hardest, most interesting problem that I can think of to work on? And social robotics was the thing that I thought would be the most likely to keep me interested for all that time.
Ray Briggs
So Elaine, for listeners who might not be familiar with social robotics, can you tell us a little bit about the kinds of things you work on?
Elaine Short
So I am fundamentally interested in robots that help people, and not just help people, but help people without being annoying, or frustrating, or obnoxious, and that people actually want to use. There’s a lot of problems that go into that. I think a lot about robots that learn from people, especially people who aren’t robotics experts. And I’m really interested these days in getting robots out of the lab and into the world, where things are maybe a little bit less controlled, and a little bit more, from my perspective, exciting and hard and interesting.
Ray Briggs
So if robots are learning from people, does that conduce to the goal of them not being annoying? I would think that like, people do a lot of annoying things.
Elaine Short
You know, there’s annoying, like a person, and then there’s annoying, like a broken piece of technology. And I think, you know, if my robots could be more annoying like a person and less annoying like when your computer won’t connect to the Internet, then I would feel pretty good about that, actually. Although, of course, we hope that the robots are also maybe being like a friend and not like, you know, that guy that you can’t stand.
Josh Landy
So what’s a good example, Elaine, of a current robot that’s helping people out and is more like a friend than a broken toaster?
Elaine Short
So in addition to, I guess, I hope, my robots, I am pretty excited. My postdoc advisor, Andrea Thomaz, has a company that makes robots that work in hospitals, fetching and delivering. But one of the neat things about this robot is that they’ve put a face on it, because it turns out that people would like to, you know, have that robot be social and be able to interact with people. So that would be an example of a robot that’s really out in the world right now doing useful things for people and doing social tasks.
Josh Landy
And what about one of your robots? Can you tell us about one of those?
Elaine Short
Yeah, so we have actually, we’ll call them the robot twins, Beep and Boop, who are two instances of the same robot. And we have them in the lab working on being able to learn from people, especially people who aren’t experts. So that means being able to learn much faster than normally your typical machine learning algorithms can learn. One of the ways we do that is that robots, unlike computers, can actually interact with the world. So we can have the robots learn some things by interacting directly with the world and learn other things, especially things about maybe preferences that a user might have, by interacting with a person. And by combining those two sources of information, you can learn a lot faster than you would be able to learn from either alone.
Josh Landy
You’re listening to Philosophy Talk. Today we’re thinking about the social lives of robots with Elaine Short from Tufts University.
Ray Briggs
Can robots help children on the spectrum develop social skills? Will they give people with disabilities more autonomy? Could cooperating with a robot make you more cooperative?
Josh Landy
Welcoming our socially assistive robot overlords—along with your comments and questions—when Philosophy Talk continues.
Ray Briggs
On Wednesday, November 17, you can join the Stanford Institute for Human Centered Artificial Intelligence for a virtual workshop on data-centric AI.
Josh Landy
Algorithms are only as good as the data we feed them. How can we make sure those data are free from bias?
Ray Briggs
This event is free and open to the public. More information at hai.stanford.edu/events.
Josh Landy
I’m Josh Landy, and this is Philosophy Talk, the program that questions everything.
Ray Briggs
Except your intelligence. I’m Ray Briggs, and we’re thinking about the social lives of robots with Elaine Short, from Tufts University, [as] part of our series, “The Human and the Machine,” sponsored by HAI, the Stanford Institute for Human Centered Artificial Intelligence.
Josh Landy
We’re pre-recording this episode, and unfortunately, we can’t take your phone calls. But you can always email us at comments@philosophytalk.org, or you can comment on our website, where you can also become a subscriber and gain access to our library of more than 500 episodes.
Ray Briggs
So Elaine, your research is sounding really cool. But it also sounds really difficult. What would you consider to be the biggest challenge you face in your work?
Elaine Short
So, you know, as you say, social robotics is fundamentally a difficult problem. Like I said earlier, that was what attracted me to it. And, you know, at a high level, what I have to do, is take things like being friendly, everything from how you move your arms to where to look, to what to say, to how to tell what the other person in the interaction is doing and thinking, and I have to explain that to a computer. And if you know anything about computers, you know that computers are all numbers. And in fact, all computers can really do is add, subtract, multiply, divide, compare numbers, move numbers around, and interpret some numbers as meaning that you’re supposed to do one of the previous things. And the thing that makes computers impressive is that they can do that a billion times a second. So we have to take, you know, all these really hard, complicated things about social interaction, and we have to put it into numbers, into math, into numerical models.
Ray Briggs
So are you really teaching the robot to be socially intelligent? Or is this sort of more like simulating the things that a person does, but having the robot doing them for different reasons? Like is that a distinction that makes sense in your work?
Elaine Short
So that’s not a distinction that I really make. In some ways, what I’m really interested in at the end of the day is people. What people need, what people want, what people are expecting. And so when we talk about social intelligence for robots, or when I talk about social intelligence for robots, what I really mean is equipping these computers, these computers that have bodies, with the models and algorithms that they need to give people what they’re looking for, whether that’s exactly equivalent to human social intelligence or not.
Josh Landy
But it does seem like such an enormous challenge, right? Because I mean, as Ray was saying earlier, you know, human beings, we’re just sort of lucky. We tend to come equipped with certain capacities, to kind of read faces, for example, instinctively and things like that. And every tiny little movement of a facial muscle sort of combines with contextual information, right, in all these myriad ways. And so one little eyebrow raise in this context can mean, I don’t know, suspicion, and in another context, amusement, and how do you program all that in?
Elaine Short
Well, isn’t that great, that I have so many years to work on this? Yeah, exactly. It’s really hard. And so we often narrow the problem down. So robots are still pretty bad at even things that you might think of as being fundamental to a social interaction. So I have one student who is working on thinking about attention. So how do you tell when someone’s paying attention to you? Or if maybe you’re in a group of robots, can you work together with the other robots to figure out which robot the person is paying attention to? So that’s a pretty well defined social skill that we’re looking for from a robot, and that’s typically, I guess, step zero. I’m a computer scientist — we start counting at zero. Step zero of building social intelligence in a robot is to narrow down which part of the problem you’re really looking at.
Josh Landy
What about cases of companionship, right? I mean, I’ve read about the PARO robot that looks like a harp seal and that keeps people company. Has that been successful?
Elaine Short
I think where it has been successful tends to be with what we would call zoomorphic robots. So, the animal robots, and somewhat with the humanoid robots. But PARO, the nice thing about PARO is it doesn’t set your expectations that it’s going to have a conversation with you. As soon as you put a face on a robot, it’s got eyes, it’s got a mouth, it maybe looks a little bit like a person, people start expecting that it’s going to be, you know, talking to them. And then it can be a little disappointing if it turns out that it’s basically running your standard voice assistant, whose names we can’t say or we’ll wake up everybody’s phones. It’s basically operating at that level from a, I don’t know, a conversational perspective. So I think, you know, the place where that’s the most successful is with those sort of cute fluffy robots. Those are great for the kind of companionship piece. But there’s still a lot of work to be done before we have humanoid robots that you would really want to be your companion any significant amount of the time.
Ray Briggs
So it sounds like one of the big things about social robots is managing the human beings’ expectations. Is one way to make robots more social just to cue the people around them to respond to them in ways that they can handle?
Elaine Short
Yeah, so I think you don’t even need to add the word “expectations” to the end of that. You can just say, one of the hardest problems in social robotics is managing the human. That is a big problem. People will do fascinating things. And yeah, a lot of what we think about is making sure that people are going to interact with the robot in ways that it expects, and by “expects,” I don’t mean that the robot is, you know, having some kind of deep thoughts. I mean, it’s what it’s programmed to do. In a lot of the work, I think the richness of the interaction best comes from humans. If you want to have a rich, interesting interaction where a lot of different things get said, robots aren’t great at that unless you spend a lot of time writing scripts. And I don’t know if you can imagine a computer scientist trying to write dialogue, but it doesn’t work very well. But humans are great at coming up with lots of, you know, creative, interesting things, creating that, like I said, richness. And so letting the richness come from the humans and letting robots do the kinds of things they’re good at, while really trying to understand what’s happening with the humans, and trying to have enough social intelligence that they don’t come across as being, you know, obnoxious or annoying. That, I think, is the sweet spot for me right now.
Ray Briggs
You’re listening to Philosophy Talk. Today, we’re thinking about socially intelligent robots with computer scientist Elaine Short, from Tufts University. So Elaine, I want to hear more about surprising things that humans do with robots. Do you have any stories about curveballs you’ve encountered in your lab or out in the world?
Elaine Short
Yeah, so I guess the most recent example, or a recent example, is that I had a robot actually out in a public space, in the atrium of an engineering building. And someone came up to the robot and saw the on/off switch on the robot’s arm and flipped it off. They flipped the switch up, to be clear, and I guess they just saw a button and wanted to know what it did. And this particular robot, because of the way the software was configured and some quirks of the computer that was running the robot, that actually crashed the entire robot, and we had to turn the whole thing off, turn the whole thing back on, and basically restart everything. So that’s, you know, one example I like to use.
Josh Landy
I just love that the problem with human-robot interactions is the humans. It makes perfect sense. Elaine, we have a couple of emails from listeners. One’s a comment; one’s a question. The comment’s from Susan in Palo Alto. Susan says, “I am old and funnily enough, not threatened by robots, primarily because we have so much to do in this world, we actually need them and should be forming alliances. For example, at USC, in their robotics division, scientists have taught robots how to lead sessions for elders who experienced deficits from strokes to do their repetitive exercises. What a boon.” That’s a great comment. Thank you, Susan. And we have a question from Bernard who says, “Aren’t AIs basically just math equations made into a computer program? They’re taken to know more about us than we do ourselves by way of data farming and algorithm building?” But, Bernard’s skeptical about this. So what do you think, Elaine? Can any algorithm really capture the full range of the human experience? Are we making a mistake in trusting algorithms too much?
Elaine Short
Yeah, so first, I want to thank Susan for the comment, because she’s actually describing a paper that I worked on. So I definitely appreciate that. That was very cool research. I think it comes back to, in robotics, there’s a weird tension between the media, popular science, sci-fi conception of robotics and what robots are actually capable of. So when you say that a robot is a collection of algorithms and data and models, I think, yeah, exactly. That’s the point. Right, the point is that we are building models of social behavior. And I think the disconnect happens in what people expect. Because it has a lot of data, and because there was a lot of computation that went into it, people expect that it’s somehow now more intelligent than a human, which is really not the case. Robots are good, computers are good, at different things than people are good at. So they’re really good at remembering everything that has ever happened to them exactly. Right, we just write the video data to the hard drive, and now we can go back and we have perfect photographic memory. But we really don’t know what any of it means. And that turns out to be a really hard problem. So when people see a robot that does something really cool that’s hard for a person, or a computer that beats someone at chess or wins at Jeopardy!, you know, the thought is like, “wow, that’s so cool!” But then you put a robot in a competition with a person at things like seeing the world and understanding it, or picking things up, and the person is going to win every time. So I guess I would say, I think it’s okay that it’s algorithms. That’s the point. That’s what we’re trying to do. And don’t let the sci-fi mental image of what robots are throw you off into thinking that robots are somehow superior in every way, versus we’re trying to solve a hard problem with computers.
Josh Landy
Yeah, so I guess I want to raise a specific worry. So I totally agree, strengths and weaknesses, right? You could have a robot assistant stay awake 24 hours a day, seven days a week. How amazing is that for someone who has a lot of, you know, needs, health needs, or something like that. Conversely, I’d worry about some of the issues that have been sort of percolating about algorithmic bias, right. There obviously is, unfortunately, a lot of bigotry in our society. So if you just feed an algorithm data coming from, you know, the Internet, or just that sort of cross section of the population, unfortunately, it can often end up making some pretty egregious mistakes. We’ve seen that with, you know, facial recognition software, and there was that medical chatbot that ended up telling a suicidal patient to end their life. And so I wonder, you know, is that something you’re worried about specifically? In other words, in cases, for example, where you have a robot that’s designed to help a kid on the spectrum develop social skills, is that something that worries you? You know, how are we going to design it so that its algorithm is socially sensitive?
Elaine Short
Yeah, so the nice thing that people who spend a lot of time with machine learning algorithms know is, sometimes the saying is “garbage in, garbage out.” But what it really means is, you have to think deeply about what you’re modeling. Remember, I said step zero is to narrow down the problem? Part of that is, when you say, “Oh, I’m going to model this thing, I’m going to imitate,” asking: what am I imitating? And is that actually what I want? So, you know, you brought up the example of autistic kids. And, you know, many autistic adults prefer identity-first language over person-first language, so I’m going to use that. But we could have a whole separate conversation on that. And, you know, the question becomes: what do we mean by “train social skills”? How do we do that in a way that is supporting, for example, the neurodiversity movement, that is being just and good and doing good in the world? That comes in at the problem-definition stage. And so, you know, I think a lot about trying to get more diversity into robotics, or diversity into computing. And I think that that’s just so incredibly important for making sure we’re framing these questions in the right way to start out with. That when we say we’re going to build a facial recognition model, we take a look at the data that’s going into it and make sure that it’s actually what we want to model. When we say, “here’s how you treat people,” we make sure that what we’re imitating isn’t already, you know, biased or otherwise bad or problematic. So I think that is part of the research problem. And that happens much earlier than you might think.
Josh Landy
You’re listening to Philosophy Talk. Today we’re thinking about social robotics with Elaine Short from Tufts University.
Ray Briggs
Would you want a socially intelligent robot in your home? Would it keep you company and improve your mood? Or could it get hacked and reveal your most personal data?
Josh Landy
The future of social robotics, plus commentary from Ian Shoales, the Sixty-Second Philosopher, when Philosophy Talk continues.
Can socially intelligent robots help us do what we want to do? I’m Josh Landy, and this is Philosophy Talk, the program that questions everything.
Ray Briggs
Except your intelligence. I’m Ray Briggs. Our guest is Elaine Short, from Tufts University, and we’re thinking about the social lives of robots as part of our series, “The Human and the Machine,” sponsored by HAI, the Stanford Institute for Human Centered Artificial Intelligence.
Josh Landy
So Elaine, you were saying earlier, you know, social robotics is still very much in its infancy, and I can’t wait to see all the things you’re going to discover in your career. What’s the most exciting thing we can look forward to in the next 10 to 20 years?
Elaine Short
Yes, so in the 10 to 20 year range, I’m really excited about all of the robots that are going out into the world. You know, it used to be, maybe you had a Roomba. But now we’re talking about everything from robots doing deliveries at hospitals to robot cars that are driving themselves. And that creates so many more opportunities for interaction with people. And I think we’re going to start seeing more and more social intelligence, more intelligence about people and their expectations, from these real-world robots that are out, you know, doing things that make our lives better. I think one of the cool things about being in social robotics, and also maybe one of the challenging things, is that if I do my job right, then you will not even necessarily notice that I did my job right. You’ll just think, “Wow, that’s a really great robot. It really understands me.”
Ray Briggs
So I love the promise of like, more chorebots that just do our chores so efficiently that nobody even has to worry about it. But I also worry a little bit about like robots that are sort of cuddly yet dangerous. So I’m thinking about things like, um, so the Boston Dynamics dog is extremely cute. And also, it could be used as a military weapon, and like, a cute murder weapon. And that seems sort of less encouraging. So do you worry about sort of potential for social robots to do bad social things, as well as good social things?
Elaine Short
I think you run this risk with a lot of different kinds of technology. I think it is important to make sure, like I was saying earlier, as a researcher, as someone who cares about doing ethical work, that your robots and your designs, as much as possible, have it built in that they aren’t doing harm, or at least that they’re avoiding harm as much as possible. I’m not sure the problem of a cute, cuddly social robot that then turns out to, I don’t know, do something terrible is all that different from any other kind of product safety problem. I think the most likely way these social robots are going to harm someone isn’t, “Oh, you know, I nefariously programmed this robot to do something bad.” It’s more likely, “Oh, I didn’t check the wires correctly, and now it’s overheating and it caught on fire.” So maybe, in the near term, that’s the more likely problem. The bigger question of, you know, if we give these robots social intelligence, can they somehow infiltrate our lives and do terrible things? It’s something people are thinking about. There’s actually research from another lab at Tufts that looks at the ways that interacting with a social robot can cause people to give up the answers to their security questions. So that would be an example. But, you know, we already have artificial intelligence that does that. It’s just, you know, little Facebook polls and whatever, or hacking huge companies and getting password lists. So I think it introduces some new kinds of problems. But it is also just another place where we need to be thinking about the kinds of things we’re already thinking about in tech ethics.
Ray Briggs
Yeah. And I see the distinction between sort of intended harms or harms that are at least kind of important to the person designing the robot versus harms that are completely accidental. Do you think that there are like special risks of accidental harms that are particular to social robots?
Elaine Short
Well, I think you already identified one of the biggest ones earlier, which is that if you have a model that’s trained on biased data, then you create robots that have bias. We do sometimes think about, you know, the interesting emotional harms that social robots can unintentionally inflict. For example, if your Roomba is really lovable, and then it breaks and you have to send it in for repairs, that’s actually pretty upsetting to people, when their beloved home robot is broken. And if there’s one thing about robots, it’s that they break a lot. So that kind of unintentional emotional harm is probably one of the things that is particularly unique to social robots, because they elicit these social feelings, and people can feel like they have a relationship with them. You know, in our work, when we have a robot in a study, for example, that has an end date, we have to think about, “how do we take the robot away without hurting people?” So we might give people a picture of the robot at the end, you know, one that says, “it’s been great hanging out with you!”, that kind of thing. That can help to mitigate some of those kinds of harms.
Josh Landy
You’re making me sad—it must be, for some, like the death of a beloved service animal. That’d be really tricky. I have a different kind of question for you. I have to admit, I’m very excited by this technology, in so many different ways, right? So Susan’s point about stroke patients having a kind of coach, and, you know, folks with dementia having a kind of guide and monitoring, and kids on the spectrum having a play partner who can help them develop certain kinds of social skills, and folks with depression getting a companion—I mean, this is all immensely exciting to me, as long as we can solve some of the problems we’ve talked about. My question is about the moral questions that come out of this. You might think, in some ways, it’s a moral blessing, in a sense, because we can reasonably ask robots to do things that would be unreasonable to ask of a human—like, as I said, to stay up 24 hours a day, seven days a week, to help us when we have a need. Do you think there are any moral downsides, or other sorts of moral issues, that come up for you in this field?
Elaine Short
I can get to downsides in a minute, but one of the things I mentioned I’m interested in is disability, and I’m interested in disability rights. One of the things I think is really interesting about robots as assistive technologies is the ways in which they are not people. Having a personal care assistant can be very detrimental to one’s independence. You basically end up in this weird situation where you either have to ask a person to not be a person. You say, okay, you’re doing all these things for me, but I’m going to treat you sort of like an extension of my body and not think too much about you as having a lot of agency. Which is problematic in one way. Or you end up in a situation where you are damaging your own agency: there’s this other person, and they’re doing things for me, and they’re making decisions about what I am and am not allowed to do. And that’s problematic in a different way. So the nice thing about robots is, it’s totally okay for the robot not to have its own agency, to be, in some sense, fully subservient to a person’s needs. And I think for disabled people that can be really powerful. So that’s something, you know, maybe not so much on the moral-challenge side. But when I think about robots and their sort of moral potential, that’s something I think is really interesting, and I’m excited about.
Ray Briggs
That’s really cool, and it makes me want everybody to get the assistive robots they need. Do you have thoughts about how social robots could be well distributed, so that not only do the people who want a robot have one, but the people they would help most get access to them?
Elaine Short
Yeah, that’s a really thorny problem, because we already have so many problems with distributing resources. Not everybody who needs, you know, a personal care assistant gets as much time with their assistant as they need. Not everyone who needs a wheelchair-accessible van gets one. And to me, this is a question of advocacy, maybe outside of the research sphere. It’s something I think a lot about and worry about. You know, our robots are, at the moment, very expensive. They’ve been getting cheaper rapidly, but the bigger robots, especially if you want an arm, and especially if you want an arm that can do something, can run, you know, tens of thousands of dollars, up to even $100,000 or $200,000. And that’s just not accessible from a financial perspective for most people. So on the technical side, we keep trying to make the technology better and cheaper and more available. And then on the policy and advocacy side, we advocate for people to get the things they need, whether that is some kind of robot assistance or, you know, everything else a person needs to live their best life.
Josh Landy
I have one more question, which is somewhat frivolous. I’m a big fan of “The Hitchhiker’s Guide to the Galaxy.” And in that sci-fi world, they imagine robots with “genuine people personalities,” right? So you have a computer that’s overenthusiastic and one that’s the world’s biggest pessimist. Do you imagine, you know, in 20 or 30 years, that in the ideal state these robots would have different personalities, and they would be matched to different kinds of folks with different needs and preferences?
Elaine Short
Yeah, so I guess the serious answer to that is that there are a lot of researchers in social robotics, in my field, who do think about personality: how you give a robot personality, and how different robot personalities might match with different human personalities. The other answer I have, or the other comment, maybe, because you talked about sci-fi, is that I’m not a huge sci-fi person. I think that can make me in some ways a better roboticist, because I am less disappointed by real robots than people who’ve been raised on sci-fi robots, on your C-3PO. Even R2-D2 has a level of social intelligence that we are not at with real robots. And when you talk about sci-fi robots, they often end up being these stand-ins for people. Either they’re stand-ins for people with marginalized identities (that’s actually how the original play the word “robot” comes from used them, as a stand-in for the working class), or they’re these kind of weird thought experiments on what makes us human: what if you had a person with no emotions? Would they be human? And that’s really fundamentally different from how real robots are. So when you’re talking about robots with personality, if you mean, are we going to have robots that behave in different ways? Yeah. Are we gonna have robots with personalities like human personalities? Probably not exactly. And, you know, the more people expect that, the harder it sometimes makes my job.
Josh Landy
Alright, I’ll try and lower my expectations. But on that, I want to thank you so much for joining us today. It’s been a fantastic conversation.
Elaine Short
Yeah, my pleasure.
Josh Landy
Our guest today has been Elaine Short, Professor of Computer Science at Tufts University, and author of more than 20 papers on human-robot interaction. So Ray, what are you thinking now?
Ray Briggs
Well, I think that was a really great sales pitch, in a lot of ways, for social robots. But I still don’t expect them to be our friends. Like we were talking about, AI can be really biased: you have sentencing algorithms giving longer prison sentences to Black defendants than to white defendants. And I wouldn’t be friends with a racist person, so why would I want to hang out with a racist algorithm?
Josh Landy
Yeah, and I’m super encouraged to hear Elaine talk about just that and about the work that’s being done by the experts to try to solve those problems. You know, it’s our fault as humans, right? We’ve been feeding the algorithms bad data. We don’t have to keep doing that.
Ray Briggs
Yeah, so how can we do better?
Josh Landy
Well, it turns out there’s actually going to be an HAI event on that very topic. And it’s free and open to the public, wherever you are. So everyone, welcome. It’s on Wednesday, November 17, at 9am Pacific time. For more information, go to hai.stanford.edu/events, and look for the virtual workshop on data-centric AI.
Ray Briggs
We’re gonna put links to that and everything else we’ve mentioned today on our website, philosophytalk.org, where you can also become a subscriber and gain access to our library of more than 500 episodes.
Josh Landy
And if you have a question that wasn’t addressed in today’s show, we’d love to hear from you. Send it to us at comments@philosophytalk.org, and we may feature it on our blog.
Ray Briggs
Now sometimes social but never robotic, it’s Ian Shoales, the 60-Second Philosopher.
Ian Shoales
Ian Shoales. Robots are surprisingly diverse, at least in pop culture. We have shy robots, robots from the future that want to kill us, butler robots, robots that perform surgery, fun little robot pals, sex-toy robots. Also, there are many stories in movies about robots who want to become human, variations on the Pinocchio theme, I guess, except the robots in these stories either turn out to be more human than human, or more psychopathic, or they’re all-powerful like Gort in “The Day the Earth Stood Still,” tolerant of humans until we step out of line, in which case, well, what’s one less planet, really? It’s like melting a hailstone in a summer storm. There are tropes: robots are superhuman in some ways, strength, intelligence, but all too human in others, usually self-image. They can’t have meaningful sex or appreciate food. This often leads to robot bitterness or sadness, like that movie “A.I.,” the weird amalgam of Stanley Kubrick and Steven Spielberg, which offered the incredibly depressing story of a little boy robot who just wants love and winds up being unhappy underwater for eternity. Spoiler alert. Oh wait, he’s saved by aliens at the end, kind of. Hurray. Robots often exist, again at our cultural whim, in roughly the same place that migrant workers do. They are there to do the chores that we can’t or won’t do: pick crops, serve drinks, drive cars, walk in space, walk underwater. We have robocalls. We don’t think of those calls as robots, but they are, and they know how to push our buttons. “Help the wounded police officer [unintelligible],” for example. The robot does not care, but it knows how to make us care, or at least make a donation to the [unintelligible], or pay attention to the expired warranty. Not to mention Siri and Alexa, or Roombas and other robot vacuum cleaners that patrol the house on their own. iRobot, the company that makes the Roomba, according to Wikipedia, also makes little tank drones.
AI apps that can control robot swarms, whatever that might mean, all-terrain rescue and firefighting vehicles designed to avoid all obstacles, reconnaissance robots that run underwater, and miniature robots, little tiny robots, designed to boost radio communication in battle zones. Whee. Despite all that, killer robots from space or the future are rapidly becoming passé. Bye-bye, Terminator. We are [unintelligible] prepared for a new world in which all the jobs will be done by robots. Only there might not be enough humans left, because we didn’t have the sense God gave a goose and refused to take the vaccinations that would have saved our lives in the last pandemic. So now the robot vacuum cleaners roam sadly alone in a dust-free house. But robots will be programmed to do something about their loneliness. So maybe in the future they’ll come up with artificial humans to keep them company, just to have somebody to robocall about that expired warranty, even though nobody’s actually driven a car since 2076. Robots did that for us, and we never went anywhere anyway. And then, maybe, hello robo-moms, who will be our algorithmically intuitive guides to the brave new immersive meta-world promised by Mark Zuckerberg. We just put on the headsets and carp about vaccinations and liberty while robo-mama stuffs us with nutrients from Amazon. That’s the dream. But I suspect that most robots will never step into Meta World. That’s the domain of Zuckerberg, and to tell the truth, robots are a little bit afraid of the guy. They think he’s a robot in denial. Whoa, if true. I gotta go.
Josh Landy
Philosophy Talk is a presentation of KALW San Francisco Bay Area and the Trustees of Leland Stanford Junior University, copyright 2021.
Ray Briggs
Our executive producer is Tina Pamintuan. The senior producer is Devon Strolovitch. Laura Maguire is our Director of Research. Thanks also to Merle Kessler and Angela Johnston. Support for Philosophy Talk comes from various groups at Stanford University and from subscribers to our online community of thinkers. Support for this episode comes from the Stanford Institute for Human Centered AI.
Josh Landy
The views expressed—or misexpressed—on this program do not necessarily represent the opinions of Stanford University or of our other funders—not even when they’re true and reasonable. The conversation continues on our website, philosophytalk.org, where you can become a subscriber and gain access to our library of more than 500 episodes. I’m Josh Landy.
Ray Briggs
And I’m Ray Briggs. Thank you for listening.
Josh Landy
And thank you for thinking.
Related Resources
Web Resources
“Data-Centric AI Virtual Workshop.” Stanford University.
Scassellati, Brian, et al. (2018). “Improving Social Skills in Children with ASD Using a Long-Term, In-Home Social Robot.” Science Robotics.
Short, Elaine, et al. (2010). “No Fair!! An Interaction with a Cheating Robot.” IEEE Xplore.