Could Robots Be Persons?

April 7, 2024

First Aired: January 9, 2022


As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of “The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation.”

Part of our series The Human and the Machine.

Should robots be treated like people? Could they be blamed for committing a crime? Josh questions why we would want robots to be like humans in the first place, and he is especially concerned that they might turn against their owners or develop the capacity for suffering. On the other hand, Ray points out that robots are becoming more and more intelligent, so it's possible that they might develop real emotions. Plus, they wonder how we could ever draw the line between a complicated artifact and a human being.

The hosts welcome Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin, to the show. Joanna discusses the EU's current policies on digital regulation and a proposal to create synthetic legal persons. Josh asks why we shouldn't design robots with rights and responsibilities even if we could, and Joanna points out that we shouldn't own people or give ourselves the power to call them into existence. Ray brings up the unpredictability of humans, prompting Joanna to explain why an unpredictable robot is incompatible with a safe product. She argues that granting robots personhood would leave designers neither morally nor legally liable for the actions of their artifacts, which would create moral hazards.

In the last segment of the show, Ray, Josh, and Joanna discuss misconceptions about robots and personhood, and how the way we think about robots reveals something about ourselves as human beings. Ray considers whether we can view robots as extensions of our capabilities and what happens when users and manufacturers have different moral values. Joanna explains why we shouldn't think about corporations as AIs, but rather as legal persons. Josh brings up the possibility that designers might want to create robots that are more person-like, and Joanna predicts that within the next few years governments will develop regulations to inspect robots and ensure their safety in much the same way as medicines.

Roving Philosophical Report (Seek to 3:45) → Holly J. McDede examines what happens when a piece of technology is designed to make moral judgements.
Sixty-Second Philosopher (Seek to 45:05) → Ian Shoales explains why we need to convince robots that they’re human.


Guest

Joanna Bryson, Professor of Ethics and Technology, The Hertie School of Governance

Related Blogs

  • Digital Persons?

    January 7, 2022

Related Resources

Books

Wiener, Norbert (1948). Cybernetics: Or Control and Communication in the Animal and the Machine.

Web Resources

Johnson, Kenneth and Badham, John (1986). Short Circuit.

Get Philosophy Talk

Radio

Sunday at 11am (Pacific) on KALW 91.7 FM, San Francisco, and rebroadcast on many other stations nationwide

Podcast

Full episode downloads via Apple Music and abbreviated episodes (Philosophy Talk Starters) via Apple Podcasts, Spotify, and Stitcher