Digital Persons?

07 January 2022

Could robots have feelings that we can hurt? Should we hold them accountable for their actions? Or would that just be a way of letting humans off the hook?

This week, we’re asking “Could Robots Be Persons?” It’s the third and final episode in our series, generously sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

Before answering the question whether robots could ever be persons, we might want to ask why we would even want them to be persons in the first place. That’s not to say robots don’t have many important uses. They can assemble cars in factories, serve as precision tools for surgeons, and so on. But they can do all of that work without having any personality or feelings.

Admittedly, AI keeps getting smarter and more sophisticated. Today’s robots can already make decisions without human input, and they can explore and learn from their environments autonomously. They can accomplish intellectual tasks we once thought impossible: recognizing images, carrying on complex conversations with us, beating the best human players at games like chess.

But that’s not to say that they’re like us in any important sense. Sure, they can imitate us and our actions, but ultimately, they’re just clever machines. We have beliefs and desires, we feel pain, we can be punished for our choices. None of that is true for robots. You can’t hurt their feelings or frustrate their desires; they don’t have any to begin with.

Those are the robots we have now. Perhaps future robots will be more like us. Perhaps someday scientists will build a robot with real feelings and emotions, not just imitation ones. Whether that’s even possible or just the stuff of sci-fi, it seems like it would be a bad idea. Imagine your future housekeeping robot starts to hate its job, refuses to do any cleaning, and instead decides to watch TV all day! That would defeat the entire point of having a robot in the first place.

More worryingly, if we built conscious machines, robots with feelings and emotions, then we would be building things capable of terrible suffering, and that seems morally wrong. Some might suggest it’s no different than having a child, which is, after all, also creating a conscious being capable of suffering. The big difference between children and robots, however, is that robots are products created by us to use. It’s fine to build products, and it’s fine to make new people, but nothing should be both a person and a product. To treat a person like a product would be cruel, dehumanizing, and unjust.

Apart from the question of suffering, creating products that have the legal or moral status of a person would mean having to hold them accountable for their actions. And how exactly would we do that? Take away their screen time? Send them to their shipping container for a time-out?

I suppose these future sentient robots we’re imagining would have beliefs and desires of their own, so if we wanted to punish them for their misdeeds, then we would have to take away things they want. But designing products with real desires, desires that could be frustrated, seems like a dangerous proposition. What if their desires ended up involving the enslavement of humanity?

Given how quickly AI and robotics are developing, we need to think carefully about these kinds of questions. If creating a robot with feelings and emotions is a bad idea, how do we make sure it never happens? How would we tell the difference between a very sophisticated imitation and the real thing? What are the dangers of treating a product like a person?

Our guest this week is Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin. In addition to her research on intelligence, human-robot interaction, and the ethics of AI, Joanna advises EU lawmakers on how to regulate digital technology and protect people from potentially harmful AI.

Join us for what promises to be a fascinating conversation!

Image by Comfreak on Pixabay

Comments (4)


Harold G. Neuman

Friday, January 14, 2022 -- 3:52 AM

I enjoy speculation. So, considering the opening questions above, I offer the following counterpoints. If we are looking for sentience in AI, that at least suggests we have thought about whether robots could/would/should have feelings. It would therefore entail some awareness, on our part, that those feelings could be hurt. This further implies we would have them feel something like OUR pain. On the question of accountability, it seems to me that, as creators of AA (artificial accountability), we would have no fingers to point, or, to use the old retailer's admonition: 'you broke it, you buy it'. Attempting to use one's creation to let one's self off the hook is, as your colleague might say, absurd. There are, I guess, ethical and moral questions aplenty here. But the creation(s) would not be creating them.

Tim Smith

Saturday, January 15, 2022 -- 11:30 AM

Something very odd has happened in machine learning recently. Not only have technologists been able to mimic pattern recognition and many cognitive processes, but engineers have generated code that can solve problems without much understanding of how the code gets there. Scientists can check the answers, but programmers can’t determine the course of logic or if reason was present.

If one can’t determine why something happened, I’m not sure we should attribute accountability to the person or organization that created the precursor to that something. It should fall on the algorithm that made that decision, even if that same algorithm can’t tell us why it made its choice. Up to this point tracing source algorithms has been challenging, but there is hope with innovation in blockchain programming. Attribution is the primary return on blockchain – not cryptocurrency – after all.

Contrary to the view of the EU or Joanna Bryson, at some point (and it has quite possibly already happened), Google or Microsoft or any government need not be held responsible for decisions made by their robots or AIs (two very different yet similar entities, a point raised in the comments on the show).

No AI should be allowed access to decisions unless there is a demonstrated advantage in the algorithm in the first place. But this isn’t our problem. We have long since delegated decisions to machines with expert-level code to handle situations better than humans ever could. You do this whenever you put your foot on the brake pedal or turn on your phone. The issue is super-intelligent code that can’t be understood retroactively, in real-time, or, as fear-mongering authors suggest, in the future. The wreckage of the future is mighty.

AI promises that it can pay its way, and the surest path for this to happen is to treat it as an adult. It will have to be responsible for its impact on others, and it will have to improve the lives of others to boot.

People have their own paths, mistakes, and lives. Maybe my kids will pay me back, maybe not. They will carry the scars of my parents' mistakes. Scarring code serves no purpose. It is time to treat every batch of machine learning code, every quantum annealer, every piece of Hadamard machine code as a legal person and an economic entity, with profits and interests accruing to all beings beyond the people, projects, and companies that created them. Although I disagree with the EU's approach of tracing liability back to corporate entities, I fully support a special status of digital personhood. This technology is mature enough to generate profits and returns without fear of human scars and flaws.

Is there such a thing as a "digital person"? You and I will never meet, and to you I am just that, a digital person. We are creatures of differing capabilities. Live and let live, and let's see what our creations can create in our image and capacity.

Harold G. Neuman

Sunday, March 20, 2022 -- 8:52 AM

Got an invitation to complete a detailed survey on ethics, as might be applied to 'sentient' rescue robots; healthcare 'bots, and the like. This is part of a project at Cal Fullerton. An interesting thought experiment, built around the trolley problem and others. I appreciate speculation nearly as much as knowledge. As with many such exercises, there were putatively no right or wrong answers to the scenarios. As a practical matter, however, there were choices which were more right than others, from a pragmatist view. The answers were either Yes, No, or Maybe, so, depending on one's level of humanity, and/or other factors, the survey would yield a diversity of results. I wish the researcher well and hope the survey is useful.

Harold G. Neuman

Thursday, August 18, 2022 -- 8:17 AM

Well, here we are, August of 2022, and the controversies and discussions around AI continue. A new invention, Sophia or Sophie, depending on which spelling is correct, has been pronounced "really alive." What alive means here is not entirely clear. So far as I know, robots have no heartbeat and do not breathe. We went through the LaMDA episode, and that test failed, costing an overactive thinker a job. Life in the blogosphere goes on. A blood relative of mine has a wife named Sophia, and they now have a baby girl. All are alive and well. New terms in philosophy and elsewhere come and go. The latest is curious: authoritarian populism. I do not know what it means yet, but am searching. If it occurs to you that authoritarianism and populism may not fit well together, I think you would be right. Somehow, someone believes authoritarian populism is OK. Maybe so. There is a MULLET competition coming up too...
