Preserving humanity in the age of robots

REGINA BARBER: Human beings are hardwired to search for social connection. We naturally think of even the most basic objects as having feelings and experiences, which makes us feel attached to them, even if they're just a vacuum.

EVE HAROLD: I mean, there are people who name their Roombas. It's very, very common for Roomba owners to give a name and ascribe a personality to their Roombas.

BARBER: Eve Harold is a science writer, and she was fascinated by this desire to connect and how it's driving the technology we build.

HAROLD: We have robots that express emotions. Of course, they don't feel the emotions at this point, but they act and look and move as though they do. And this triggers an emotional reaction in us, which is almost irresistible.

BARBER: Her curiosity about the technology is why she wrote a book called Robots and the People Who Love Them, about social robots, robots designed to interact with humans and with each other. They can do things like care for children and the elderly, and even act as a friend. But Eve's book also explores the darker side of the field, some of the ways social robots might make us more lonely, more isolated.

HAROLD: There are men who actually have married holograms and anime figures, so they call themselves technosexuals. And they have long-term relationships. They see themselves as being faithful to their techno mates or girlfriends or wives.

BARBER: Which is alarming. But it's also kind of unsurprising, because Eve says that technology is already starting to change our perceptions of reality and may even change our perception of ourselves.

HAROLD: They listen. They learn from us. They remember what we say, and they respond in ways that are very exquisitely tailored to us and our preferences and our history with them. And over time, this effect really kind of snowballs until you get to the point where it's like a feedback loop. It's very easy to lose the realization that they're not actually alive and they don't actually have an inner life. It's all us talking to ourselves.

BARBER: So today on the show, as the line between human and robot begins to blur, how do we hold on to our humanity and use the technology ethically? I'm Regina Barber, and you're listening to Short Wave, the science podcast from NPR.

[MUSIC PLAYING]

BARBER: So, Eve, I think it's fair to say we're kind of in this like rise of robots right now. And I actually love robots. And I'm so interested in all of these advances that are happening with them. But I definitely find them unsettling at times when they're too human-like. So I love that one of the first chapters in your book is called "Overcoming the Uncanny." What is the "uncanny valley?"

HAROLD: Sure. So the "uncanny valley" is an emotional reaction that gets stirred in us when we interface with a robot that seems too human for comfort while not actually being perfectly convincing. So there's a glitch, or there's a timing issue, or there's still a way to see that it doesn't quite hit the mark of being totally human. And this is a disturbing thing for our brains because it makes you think about things like zombies, beings that were almost human but not quite human, almost alive but not quite alive, and how terrifying that is to us. And robots can really evoke that.

BARBER: It makes complete sense to me, right, because I like cute robots, like, big eyes, big heads, things that still look like metal. Like, I think that that is the perfect kind of robot.

HAROLD: I do too. And it's interesting because in the research with people who are interacting with robots, people actually like a robot that isn't too perfect. It puts them at ease if the robot makes a mistake every now and then. Because otherwise, their brain is confused about whether they're dealing with a living thing or a machine.

BARBER: Or we could even argue that living things and humans do make mistakes all the time, so it actually is more comforting.

HAROLD: That is a good point.

[LAUGHTER]

HAROLD: I'll give you credit.

BARBER: Right. Thank you. But so, OK, so your book also touches on the question of like robot consciousness, right? Like, so we're talking about, like, what makes this robot more human? Is it conscious? Like, consciousness is this big mystery in neuroscience. So how do we even define robot consciousness?

HAROLD: Well, here's the problem. We don't understand how consciousness works, even in humans, right? In the case of a robot, OK, intellectually, we know that up to this point, current robots do not have consciousness. But that's not something that we can hold in our minds for any length of time when we interact with them. And we imagine that they do have consciousness. It's just irresistible. We imagine that. But that's a little scary because nobody knows what the robot mind would be like. It's completely alien to us. So even the engineers who write their algorithms don't understand how the algorithms reach a certain decision. It's called the black box problem, and it's there with all AI. But I think there's a lot of ambivalence about whether we really want robots to be conscious because we can't really define what that consciousness would be like. Therefore, we can't predict what they might do.

BARBER: And in your book, you used a great analogy for consciousness from a neuroscientist, Christof Koch, about how we use computers to create weather simulations and make predictions. Can you expand on that analogy?

HAROLD: The analogy is you can run a computer program, you can do computer modeling of all kinds of phenomena. A thunderstorm, for example, you can model in a computer program, but nothing gets wet. That's how you know [CHUCKLES] it's not real. It's just a program. So I mean, I think we need to be really clear on this, because it's an interesting question whether robots will ever be conscious. If they do ever become conscious, then we have to start thinking about robot rights and how we use them, in addition to how they will behave and how we can predict how they might behave.

BARBER: Wow, but how do we interact with them ethically?

HAROLD: It's a very murky understanding that we have of robots and ethics. And there really aren't any real guardrails as far as ensuring that your robot behaves ethically all the time and incorporates human values into their behavior. When you talk about ethics, you bring up questions of responsibility. I mean, if a robot, for example, a caregiving robot, somehow accidentally kills or injures a frail person that they're taking care of, who's responsible?

BARBER: That's kind of horrifying. So regardless of their potential consciousness, some people do develop these really strong, like, social connections with robots. But we recently had an episode on loneliness in the US, and you write that robots aren't necessarily the solution. Like, what's the connection between robots and loneliness?

HAROLD: Yeah, it's a strange effect, and the thing that I can compare it to is people who are too addicted to social media and end up becoming isolated because they're not interacting with real people in a real relationship. And it's very seductive and hard to prevent in the people that have these relationships because of our hardwiring. So we have to remember, keep a firm fix in our minds of the dividing line between what is a robot and what is a human being. If we don't have that firmly fixed in our minds, we can start to prioritize our relationship with our robots because it's so easy. They cater to our whims. They talk about what we want to talk about. They are very accommodating in every way, which is not something that human beings can keep up over time. So after a while, we could become so comfortable with these kinds of virtual relationships that we'll cease to reach out to other people. And there are people in this world who are especially vulnerable to it. People who are lonely and isolated and don't have enough social stimulation can actually lose what social skills they have, because they're so accustomed to this kind of consequence-free, easy, appealing relationship with a robot.

BARBER: This is, like, making me think of human relationships. We don't like it when we're challenged, and we don't like it when relationships are hard. But like, if they aren't, then it's what you're saying. There's no-- it's one-sided.

HAROLD: If they aren't challenging, you're not growing. That's kind of the bottom line. And so you're not going to grow a lot from these robot relationships. However, there are people in the world, children with autism, for example, who have a very hard time developing social skills. And there are robots that have a special autism program built into them. And they actually do teach rudimentary social skills to people on the autism spectrum. So, yes, on a very rudimentary level, they can actually help-- things like turn-taking, eye contact, things like that, very basic things can be taught to you by robots. But the thing is, you need to take those skills once you've developed them and transfer them to real people in order to keep growing.

BARBER: And this goes back to your writing about how some people even go so far as to say robot relationships, not these specialized situations that you're talking about but in general, are not only psychologically harmful, because they're emotionally one-sided, but also unethical.

HAROLD: It's unethical because the person is not connecting with other human beings. And the world is full of lonely people, people who have needs physically and emotionally. And if you're not connecting with other people, you're not part of the solution. And I do think it can be unethical simply by displacing existing relationships with people in your life. And that concerns me a lot. I would say what concerns me more than any of this is the displacement of people, actually increasing the loneliness and the isolation that exists in the world.

BARBER: It's not a silver bullet.

HAROLD: It's not a silver bullet. It's not worthless either. For example, for an older person who has dementia, a relationship with a robot can ease their loneliness and their sense of isolation, which is therapeutic. So yes, there are some good uses of this technology, no doubt about it. It's just that we need to know how far to take it and when to step away from the robot and start engaging in the real world, because that's where you're really going to grow those social muscles and those ethical muscles that you need.

BARBER: So after writing this book, like, how do you see robots fitting into our lives in the future?

HAROLD: Well, I think robots are going to change the culture, for better or worse, but in some ways for the better. You have to remember that these robots are going to be consumer household robots that do more than just converse with you. They'll look up information for us. They'll send emails, and they'll do so many things for us that I think are wonderful and that will free us up to perhaps use our time in a more productive, more meaningful way.

BARBER: For more human interaction.

HAROLD: More human interaction, more education, more pursuing hobbies, all kinds of things, many of which we can't currently imagine because our culture hasn't gone there yet.

BARBER: Thank you so much, Eve. This was really, really wonderful.

HAROLD: Well, thank you so much for having me.
