The Social Lives of Robots

Sunday, November 14, 2021

What Is It

Machines might surpass humans in terms of computational intelligence, but when it comes to social intelligence, they’re not very sophisticated. They have difficulty reading subtle cues—like body language, eye gaze, or facial expression—that we pick up on automatically. As robots integrate more and more into human life, how will they figure out the codes for appropriate behavior in different contexts? Can social intelligence be learned via an algorithm? And how do we design socially smart robots to be of special assistance to children, older adults, and people with disabilities? Josh and Ray read the room with Elaine Short from Tufts University, co-author of more than 20 papers on human-robot interaction, including "No fair!! An interaction with a cheating robot."

Part of our series.

Listening Notes

Can robots learn to read social cues? Is it possible to get empathy from an algorithm? Ray questions the need for socially intelligent robots to do work that humans already do well. Moreover, she is skeptical that numbers and computational data can create something as complex as social intelligence. Josh, however, thinks it is useful for robots to learn to read social cues, since they must navigate different environments and spaces.

The philosophers are joined by Elaine Short, Professor of Computer Science at Tufts University. Elaine’s work focuses on robots that help and learn from people, as well as what happens when robots exit the lab and enter the world. Ray asks if robots are truly learning social intelligence or if they’re simply simulating humans, but Elaine considers the distinction to be unimportant in her field. Josh asks about the success of companionship robots, which leads Elaine to describe the success of animal and zoomorphic robots. She believes that humanoid companionship robots will still take time to develop, especially since a large problem in social robotics lies in managing human expectations.

In the last segment of the show, Josh, Ray, and Elaine consider the tension between popular science and sci-fi representations of robots versus how they actually operate. They look at various weaknesses of socially assistive robots, such as their potential to make mistakes, accidental emotional harms, accessibility, and high costs. Elaine emphasizes the importance of increasing diversity in robotics and computing, and she explains how assistive robots can aid in disability rights and empowering people with disabilities.

  • Roving Philosophical Report (Seek to 4:27)→ Holly J. McDede discusses how a robotic bee and a robot designed to help kids with autism spectrum disorder are impacting the social lives of their respective communities.
  • Sixty-Second Philosopher (Seek to 49:01)→ Ian Shoales analyzes the variety of robots, and the tropes surrounding them, in popular culture.

Transcript


Josh Landy
Could a robot ever really understand you?

Ray Briggs
Could it at least help your child play with other kids?

Josh Landy
Would you trust a robot to take care of your child?

Comments (11)



Harold G. Neuman

Saturday, October 2, 2021 -- 12:19 PM

They need not have social lives. Because they do not think... no matter what anyone tells you. Artificial intelligence is a contrivance. Turing would tell you so. Actually, he did. But no one noticed.


Tim Smith

Thursday, October 7, 2021 -- 7:11 PM

Turing said and did many things and was ignored for various reasons, but to characterize his view of AI as a contrivance and that no one noticed is like saying Barack Obama was a racist and Donald Trump loved his country and we all missed it. All these statements are valid, but all are equally untrue.

Still, this is a cool little subthread. Alan Turing won World War II and was one of the greatest philosophers, mathematicians, and human beings of the last century; PT should do a show on him.


Harold G. Neuman

Sunday, October 3, 2021 -- 7:34 AM

I think professionals get so wrapped up in their disciplines that they are not able (willing?) to separate reality, 'how things probably are,' from fantasy, 'how they might possibly be.' Sure, I tend to think outside the box and/or jump out of the system on a few things myself. It is good mental gymnastics; it helps keep me sharper in advanced years. But when we talk of notions such as computer social skills, or the lack thereof, we are in the realm of What Does It Matter?, not how things probably are. This is more Asimov than is either practical or pragmatic. Is it useful to speculate on the improbable? I don't think so. Artificial intelligence is making differences; this much is inarguable. Mental gymnastics are good therapy. And idle chat IS a social skill, banal as it can often be. Computers are marvelous tools, AKA algorithms. I would, however, rather play chess with another human being.


Tim Smith

Thursday, October 7, 2021 -- 9:13 PM

Exoskeleton tech can get people with paraplegia mobile. Assistive robots have infinite patience for autistic kids to learn from and even play with. Eldercare robots give failing parents peace of mind. Medical rescue robots provide people with epilepsy dignity and timely care. There are many more applications for robots than these. Such roles call for social skills and sensitivity to perform well. Philosophers must consider the limits and possibilities.

Advanced neuromorphic chips are in development that will add neural-network units to process cues, detecting seizures, registering depression, and finding ways to interact with young people with autism. These and many other tasks are achievable. But it would take only a few failures for people to lose trust in technology aimed at these high-need use cases.

I agree robots don’t need to be social the way humans are, but they do need to respond to human emotions and traits (some of which humans themselves might not even perceive). Perhaps the most dangerous thing a robot could do is imitate humans. Sometimes you want a robot to act and feel like a human, but in general robots should communicate socially to humans that they are not themselves human. If that seems odd, it won’t for long. Robots are becoming more and more human. That, I agree, is a problem.


Harold G. Neuman

Wednesday, October 20, 2021 -- 5:02 PM

Tim:
I do not often look to film or popular literature for philosophy. But I was moved by I, Robot. The hi-tech Audi was neat, too.


Tim Smith

Wednesday, October 20, 2021 -- 5:55 PM

Harold:
"The Imitation Game" is worthy of Turing's consideration.


tinkwelborn@mac.com

Tuesday, November 2, 2021 -- 11:06 AM

Nice banter, here.


Daniel

Saturday, November 6, 2021 -- 6:14 PM

And informative! I don't understand why more scholars don't take advantage of this excellent resource, which I would describe as a public utility for the exchange of knowledge. In any case, the concept of the socialized robot seems closely tied to artificial intelligence, a term remarkable for its vagueness. If intelligence consists in the ability to discriminate between relevant options, e.g., that a motion sensor's signal was a human body entering a structure rather than a leaf blown in by the wind, then there is no detectable difference between artificial and natural intelligence. If, however, as forum participant Smith suggests in the first paragraph of his October 7 post, artificial intelligence fills in where normal intelligence falls short, then it is a continuation of the latter and therefore distinguishable only by degree.

But I think the concern many have about how such technology is used in our society is very important. That is, artificial intelligence can be a consumer product of natural stupidity. Take self-driving cars. Relative to existing conditions in the U.S., one product of natural intelligence would be a rapid transition at the national level from private to public transportation, reducing the economic externalities of traffic congestion and the atmospheric damage of hydrocarbon combustion. Representing artificially operated private transportation as a form of intelligence can be justified only where natural intelligence has played no part in its voluntary deployment.


Devon

Wednesday, November 17, 2021 -- 4:35 PM

Fanya in Aptos, CA sent us this question:

"Would you please ask the guest whether she's thought about using what we know about mirror neurons in her research to get a robot to model human beings and respond to humans?"

Unfortunately we received it just a bit too late to include in the program, but our guest Elaine Short sent this response:

"HRI researchers are often inspired by human cognition, including mirror neurons! In our work, that might look something like modeling how the robot would do a task while watching a person do that task. So a robot trying to figure out how to help someone put groceries away might make a plan for how to put away the oranges, which the robot is holding, AND a plan for how to put away the pasta, which the person is holding. It can then use the plan it made "for the person" to either detect when something surprising has happened ("The person put the box in the freezer! Maybe it was a box of mochi, not a box of pasta") or to make better decisions about its next actions ("The person is going to put the pasta in the cabinet, which means opening the cabinet by the fridge, so I should wait before I try to open the fridge")."
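Elaine's grocery example describes a robot that plans both for itself and "for the person," then uses the person's plan to flag surprises and to sequence its own actions. The sketch below is a hypothetical illustration of that idea, not her group's actual system; the item locations, conflict pairs, and function names are invented for this example.

```python
# Hypothetical sketch of dual planning: the robot models the person's
# plan alongside its own, using the person's plan to detect surprises
# and to decide when to defer its next action.

# Where each grocery item is expected to go (the robot's world model).
EXPECTED_LOCATION = {
    "oranges": "fridge",
    "pasta": "cabinet",
    "mochi": "freezer",
}

def detect_surprise(item, observed_location):
    """Flag when the person puts an item somewhere unexpected
    (e.g. the "pasta" going into the freezer: maybe it was mochi)."""
    return observed_location != EXPECTED_LOCATION.get(item)

def next_robot_action(person_item, robot_item):
    """Use the plan made "for the person" to sequence the robot's own
    action: defer when both plans need the same area of the kitchen."""
    # Opening the cabinet by the fridge blocks access to the fridge,
    # so those two targets conflict in either order.
    CONFLICTS = {("cabinet", "fridge"), ("fridge", "cabinet")}
    person_target = EXPECTED_LOCATION[person_item]
    robot_target = EXPECTED_LOCATION[robot_item]
    if (person_target, robot_target) in CONFLICTS:
        return "wait"
    return f"put {robot_item} in {robot_target}"

print(detect_surprise("pasta", "freezer"))    # True: surprising placement
print(next_robot_action("pasta", "oranges"))  # "wait": cabinet blocks fridge
```

The design choice mirrors the quote: the same person-plan serves two purposes, anomaly detection (the freezer surprise) and coordination (waiting before opening the fridge).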


Tim Smith

Thursday, November 18, 2021 -- 9:50 PM

Mirror neurons were discovered in macaques. The only problem is … macaques don’t imitate. The fundamental papers on this were straightforward and unassuming. Later, mirror neurons were proposed to explain language, theory of mind, and imitation based loosely on a debunked theory of motor learning in speech.

An excellent book that explains this is 'The Myth of Mirror Neurons' by Gregory Hickok. I read it a few years ago and can’t remember all the points. Still, he pretty much squelches any hope of using the mirror network to do anything other than help explain rudimentary motor/action function, which may be the use case here.

There are lessons to learn from human consciousness. These are not lessons for robots or artificial intelligence, for the most part.

As the show noted, the more common problem is that humans assume a robot is social, that it can learn and approximate human expectations. As Elaine said, it will take good old-fashioned AI to accomplish that point, and perhaps all the points. There is too much good in robots to waste time pursuing false and unrealistic expectations. For now, we should focus on safety, ease of use, and easily automated assistive algorithms for high-need use cases.


Harold G. Neuman

Sunday, January 16, 2022 -- 1:53 PM

Whoa, now. This choice of subject matter implies that not only could robots be people, they are. A bit premature, don't you think? I am not there yet. I might countenance a notion of life extension through improved circuitry if only I could see some percentage in it. But I am not so self-indulgent (or self-assured) as to believe extension of my life would make a difference that made a difference. That would amount to loathsome egomania, which we have seen more than enough of. I like to speculate about an array of topics. I even like dancing robots. Humans are social animals, when they choose. So are cats. Society, social, and socialization are words and concepts that fit certain life forms more precisely than others. Unless the definitions of the words and concepts have changed, they do not fit machines, even sophisticated ones. (Please. Make no mistake. AI and robots are machinery.) Those who elect to use such words and concepts in discussions around AI, robots, and so on need both a faster horse and a better cart to put before it.

If this is absurdity, I'll take it over contextual reality. The next big thing is notoriously elusive.
