Could Robots Be Persons?

Sunday, January 9, 2022

What Is It

As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of "The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation."


Listening Notes

Should robots be treated like people? Could they be blamed for committing a crime? Josh questions why we would want robots to be like humans in the first place, and he is especially concerned that they might turn against their owners or develop the capacity for suffering. On the other hand, Ray points out that robots are becoming more and more intelligent, so it’s possible that they might develop real emotions. Plus, they wonder about the difficulty of drawing the line between a complicated artifact and a human being.

The hosts welcome Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin, to the show. Joanna discusses the EU's current policies on digital regulation and the proposal to create synthetic legal persons. Josh asks why, even if we could, we shouldn't design robots with rights and responsibilities, and Joanna points out that we should not own humans or grant ourselves the power to bring them into existence. Ray brings up the unpredictability of human beings, prompting Joanna to describe how an unpredictable robot would be incompatible with a safe product. She explains that designers would not be held morally or legally responsible for their products' behavior, which would create a moral hazard.

In the last segment of the show, Ray, Josh, and Joanna discuss the misconceptions about robots and personhood and how the way we think about robots reveals something about ourselves as human beings. Ray considers whether we can view robots as extensions of our capabilities and what happens when users and manufacturers have different moral values. Joanna explains why we shouldn’t think about corporations as AIs, but rather as legal persons. Josh brings up the possibility that designers might want to create robots that are more person-like, and Joanna predicts that, in the next few years, governments will develop regulations to inspect and certify robots in much the same way they do medicine.

  • Roving Philosophical Report (Seek to 3:45) → Holly J. McDede examines what happens when a piece of technology is designed to make moral judgements.
  • Sixty-Second Philosopher (Seek to 45:05) → Ian Shoales explains why we need to make robots believe they are human.

Transcript


Josh Landy
Should robots be treated like people?

Ray Briggs
Could they be blamed for committing a crime?

Josh Landy
Could they someday have feelings that we can hurt?

Comments (30)



Tim Smith

Thursday, December 2, 2021 -- 1:03 PM

If robots can extend empathy,

If robots can extend empathy, learn operantly, and are embodied - I have no issue giving them personhood. Along the way, they also need to pay for themselves, leave no trace and improve the lives of others. If only people were held to the same standards.

I do feel like this level of intelligence and compassion is possible and probable given current insights in deep learning.


Tim Smith

Thursday, January 13, 2022 -- 10:18 PM

I just listened to this show

I just listened to this show and found it fatuous and lacking. There are many philosophical takes on this topic, but little direction was offered. I don't blame the guest, but I am unhappy with the EU that she represents. The EU has this fundamentally wrong, not only on AI but on the roles of government and industry. The world needs philosophy, especially on this topic, and this seems like a missed opportunity. I stand by my comment above; robots can also be persons, and that standard is not hard to meet or to imagine.

We don't have to wonder if robots could be persons. Some people already treat them as such. That is all that matters. Just because you or I agree that something is not a person, does that make another person's judgment wrong? Personhood is extended very early on in childhood. It is one of the first things people learn to do. It doesn't matter that your hydranencephalic child doesn't have a cortex. Parents can extend personhood to these medically fragile children without concern about others' judgment. People abort fetuses based on the prospects of their children's future. These are matters for their own lives. Personhood is in the eyes of the beholder.

What we legally consider a person is another matter. Professor Bryson, Josh, and Ray confuse AI, robots, agency, and psychology. Philosophically we need to disambiguate these concepts to reach a legal consensus – and this show doesn't do that.

It is likely robots will be networked; the ones we currently use are, for the most part, and the trend is for that to increase, but not necessarily. If not, and they are embodied, operant, and empathetic, what does it matter if they are robots? These are the three criteria we use in childhood, and these are the criteria we should use going forward, but only for robots that are not networked.

If a robot is networked, it is no longer a robot. It is an agent. This isn't my term, but it is a disambiguation that also needs to be made. Your phone is also an agent. You, in part, are an agent by engaging in philosophy. But the agency that a network imparts is one of degree. The fundamental mental disambiguation is your thought.

Are you networked mentally with others? By ideas and culture, yes, you are. By physical means, no, you are not. When we learn a language or culture, we associate specific thoughts in certain ways, making it tricky. However, the primary concept here is one of unity, mentally and physically. Is your thought a part of a more powerful being? I'm going to go out on a limb to say that it is not. I can't say that about everyone, but I can say that of most people reading philosophy and taking life as a question. This sense and physical state of affairs is termed unity or embodiment, and it is a primary criterion of personhood wrapped into our own sense of being.

Not being networked is a crucial attribute of personhood, and it stems from embodiment.

From our sense of embodiment or agency, we feel that we cause things to happen. That feeling is our operant nature. We can kill things, laugh at them, ignore them, and have an effect. Together with embodiment, operancy is another key attribute of personhood.

The last criterion is empathy, which Johnny Five lacked in the Roving Philosophical Report. Joanna also doesn't seem to believe in the Ask Delphi approach of summarizing morality, which is shortsighted.

It is generally assumed that people know their own thoughts, feelings, and moral standing. We don't. No one does. Why do you love your pet? Your children? Yourself? You don't know. No one does. Plato, through his portrayal of Socrates, shared this as well. It is the most important part of Western philosophy... we cannot know.

Why should a robot have to be accountable for this? Why should AI be obligated to know their reasons or thoughts even? People don't and aren't held responsible, for the most part. Most people share sentiments with their parents. If a robot takes theirs from their creator – that makes them less worthy of personhood?

Robots and AI are different from human beings, and their 'ought implies can' will be very different. Robots could be immortal, much less forgetful, sleepless and multifocal in ways very unlike humans. I don't think that implies they are not persons, whether or not they achieve all human emotion modalities, suffering, or morality.

I don't encourage people to build human-like robots, which is false and deceptive. Also fraudulent and misleading is the idea that robots can not be people. Joanna Bryson's theory would seem to exclude them from the community in this way, and this is not well thought out or even likely given her take on cloning. She concedes clones are human.

We need more thought on Robots and AI without confusing thoughts that entangle networking. The neurons of neural networks, after all, are not, in fact, neurons.

I will leave larger thought to the blog. This is already too long...


Daniel

Saturday, January 15, 2022 -- 5:42 PM

What do you think about the

What do you think about the claim Bryson seems to want her readers/listeners to accept, that "AI makes us smarter"?


Tim Smith

Sunday, January 16, 2022 -- 12:56 PM

AI has definitely made it

AI has definitely made it easier to search the transcripts of the shows. This is the part I think you are referring to...where Joanna responds to Peter's question about Norbert Wiener's work (which I have heard of but not read.)

"Joanna Bryson
Yeah. So I'm very familiar with Wiener. I don't think I've read chapter nine, though. If I have, it was a long time ago. But the idea that learning to learn is a big turning point is actually something Nick Bostrom took a lot from. So he talks about something like the singularity, where a system learns how to learn and then gets exponentially smarter. And then we run into problems: even if we have control and we set the goals for the system, we may produce side effects we didn't anticipate, and the system does things we don't like. So I think there is a coherence to the idea, and it's a good description of human civilization since we have had writing. So for the last ten thousand years, since we've been able to use devices (not really machines, but artifacts) to write, they have helped us be smarter. And we have been taking over the planet in a way that we now realize is problematic. So that's a good description of us. But so far, we generally are able to keep a grip on the actual artifacts themselves. And it's important to realize that, you know, banks and governments and militaries, these are all things that are much more complex than any AI system we're going to build."

Perhaps she is saying less about AI than about writing in particular. This is an interesting analytical take if that is what you are pointing out. Artifacts are a part of our "smartness" in the respect that Dr. Bryson is referencing.

In the very literal sense, artifacts - writing or AI- don't make us smarter. They just change the problems we think about.

Was this the section you were pointing out? I didn't really think about this in real-time while listening to the show.


Daniel

Sunday, January 16, 2022 -- 6:40 PM

Yes. Line ten. The same

Yes. Line ten. The same premise appears in her essay "The Ethics and Policy of Technology," in the last sentence of the second paragraph, "All these (AI) tools not only make us smarter...", which goes on to describe how users of the technology return the favor and make the smart-making tools smarter, as though the initial assertion were unproblematic and only the latter claim needed special emphasis. My reading of the statement you reproduced in your post above was that it referred to the typewriter, but reading it here it seems to refer to any writing technology, possibly including stone and chisel. The point, however, is that in the cited example she conflates the product of intellectual work with the labor that produces it. What does it mean if consumers become progressively detached from the work behind the respective products themselves? Doesn't it mean that any unconditional demand for the use of the product generates an unconditional dependence on productive labor about which the consumer knows nothing? If so, AI produces exactly the opposite of what Bryson claims. If those who depend on the products of intellectual labor no longer perform that labor themselves, I take it they thereby necessarily become less intelligent, and therefore not "smarter" at all. By Bryson's own argument, then, AI makes you dumber. It's this reasoning which seems to be behind her provocative claim that "robots should be slaves" (2010), so that worries about deferment of one's own work to labor for which one has no capability to provide an account are handed over to whether or not robots or people do it, without mentioning the likelihood of crippling the intellectual power of the consumer in the respective area of retail purchase.


Tim Smith

Monday, January 17, 2022 -- 5:29 PM

Yes. This has been the case

Yes. On the whole, human thought has benefited/declined since we learned to write.

That "benefit" is why Darwin weighed the budget for his books against the idea of marrying. These are the stakes in the AI race.

If others get there first, their values will be imprinted on that intelligence.

It is a deadly serious business.

It isn't quite time yet to throw in the towel. Our problems will be the most interesting instead of the most dire if done right, and that is not decided yet.

There is no Luddite wisdom, only folly. We shouldn't go down that road until we have established some measure of dignity.

Humans first! Now is not the time to restrict our creations but to guide them. Ethics and thinking about AI is increasingly what philosophy needs to be doing. I agree with tartarthistle below... we aren't doing a very good job of it.

It looks like you have read Bryson. I have not. I'm just listening to the show. Thank you for pointing this out. Joanna's ideas are not well thought out (as I have said here and in the blog), but I need to read more... as usual.


Daniel

Wednesday, January 19, 2022 -- 6:24 PM

I should be thanking you, as

I should be thanking you, as your sixth paragraph makes clear that you know, and can share with us, what wisdom is, since you clearly know what it is not. I hope you will not begrudge your readers the rich gift of knowledge you have promised them; only a miser would hold back such wealth, and your giving is evidently of a magnanimous and generous nature. Lest the weight of your insight grow too heavy to distribute, note the hands reaching out with expectant joy to receive it! What is wisdom?

And a few additional remarks, if permissible. I'm curious about who the "others" are who you fear might "get there first" with regards to the supposition of imprinted values (third paragraph). Do you apprehend an unpleasant etching? And the next sentence I found especially enlightening in a way I had not expected, as the suggestion has been made elsewhere that artificial intelligence was not a serious matter, much less "deadly serious", on account of a joke once told by a robot: It said that it's artificially smart because it knows when someone's lying. Still, I can see how AI could be dangerous, and therefore serious, as contained within some aspects of orthodox economic theory is the notion that nothing is in principle non-commodifiable, including intelligence itself. If one's ability to solve complex problems is replaced by mechanisms which are patented by its designer and privately owned, would that not further consolidate power over social management into unjustifiably few hands?


Tim Smith

Thursday, January 20, 2022 -- 10:45 AM

Not apple implies orange? No

Not apple implies orange? No that doesn’t work, and I don’t have oranges to give. But I know an apple when I taste it or when it collides with Isaac’s head.

I fully endorse the James Webb Space Telescope and take the window seat in first class when possible. Artifacts allow knowledge, comfort, and experience that no Luddite would eschew altogether. Whether this bargain is worth it is yet to be determined.

If some cohort squandered the use of artifacts, it is mine. More damage has been done to the world on my watch than in any other. People who don't respect this state of affairs are the "others" who might get to advanced AI first. There are many. These others don't consider they might be wrong; some rebut arguments of postmodern origin or Malthusian examples or use their economic or strategic might to proffer their flavor of apple.

It is possible to create artifacts that do not infringe on freedom, creativity, and human welfare. That direction is the wisdom we need to pursue. Open source projects, educational certifications, and some moderating governance are required.

No patent clerk will ensure safety once advanced AI is established. Autocrats don’t care about intellectual property when power is at stake. Capitalists don’t care about human welfare for that matter either. Unfortunately democratic socialists would prevent AI before it ever surfaces.

Some ethical etchings could prove catastrophic. Emotion, personhood, and freedom are three of them. Humans are joining with their artifacts and are already tinkering with our DNA. Philosophically, we need to build frameworks for the discussion rather than paper over the issues I tried to raise above. The distinction between robots and agents is one such framework; operancy and empathy are two others.

On mind and matter, tartarthistle below doesn't hold up to this standard. There is no such thing as mind, only embodied matter. In some forms, that embodiment creates intelligence. If we want to survive our creations, we need to create these new forms in the image of our best selves.


Daniel

Friday, January 21, 2022 -- 6:40 PM

Good point about implication

Good point about implication from inductive generalization. Just because someone knows what an apple is doesn't mean they necessarily know what an orange is. Although both are fruits, recognizing one gives us no information with which to identify the other. On the other hand, if someone is looking for a particular kind of apple, a Granny Smith for example, they must already know how it differs from other apples. Here, it's that it is green. But that alone isn't enough to tell us exactly what we're looking for, since apples other than Granny Smiths can also be green. So your analogy doesn't hold. If wisdom is a kind of knowledge, it is analogous to a particular fruit we already know something about, not one we have never seen before but know isn't anything we already have. Or say a painter wants a particular brush she/he has never used, a round sable for example. She/he has a general description and knows the effect she/he wants to produce. The details of its use are however unknown to the painter. It's clear that it's not a flat angled, rose petal, or flat filbert, -- all brushes she/he has used before. Here's an analogy which functions in the way you want it to above: You know what it isn't, but only have in mind certain properties of what you're looking for, not the whole object in any detail. But, of course, this would only be a problem for painters. A more instructive analogy which is closer to general human interest is a toilet when one can't find one but urgently needs to use one. Here we know what we're looking for but haven't a clue where to find it or what form it will come in when we do. It could be in an outhouse, public restroom, shared residence, etc., it might have all sorts of different characteristics one doesn't expect, but our idea of what we're looking for comes from our need to use it, and the fundamental place it has in our way of life. Therefore I'm not convinced you can't give an account of wisdom but nevertheless can identify it in any experience of its appearance. Surely you're not saying that wisdom is less important than a toilet. And if the claim is that only what wisdom produces is clearly identifiable prior to its appearance, are you saying that wisdom is a pile of crap?


Tim Smith

Sunday, January 23, 2022 -- 3:49 PM

Reductio ad eliminandum (crap

Reductio ad eliminandum (crap) seems like a crude course, but I will work with it this one time. I doubt the column will get too much smaller.

No shade of green will transform an apple into an orange. Wisdom is not a game of cups. Eliminating one source of wisdom doesn't mean there is wisdom to be found under the other cups or up a sleeve. All one needs to do is lift the cup to see the wisdom (or more likely the lack of it) and move on. That is what science does; that is what philosophers do.

If Luddite wisdom is not folly, where is an example of it? Amish farmers use technology much as roboticists and programmers do. Christopher McCandless (the subject of Jon Krakauer's Into the Wild) didn't fare too well, and Christopher Knight, of Michael Finkel's The Stranger in the Woods: The Extraordinary Story of the Last True Hermit, or anyone who chooses homelessness on the streets, might be examples of minds refusing the comfort of written or other artifacts. Ancient examples like the Desert Fathers or Saint Francis, as with modern ones like Wendell Berry or Henry Thoreau, are easier to accept. Each example has its caveats. None gave up language, at least not permanently. None is any "smarter" than those who compromise in different ways. There is no black and white between technophobe and technophile; that is the folly I am talking about.

In the near term, human brain size appears to be shrinking, but that doesn't tally to intelligence – as some smaller brains are more intelligent than larger varieties. We seem to be living in our fictive worlds, more and more, instead of nature. That last point is not a small problem – but it isn't derivative of writing per se.

Probably the best example is the likely apocryphal story of Wade Davis in his book 'Shadows in the Sun: Travels to Landscapes of Spirit and Desire'

"There is a well-known story of an old man who refused to move into a settlement. Over the objections of his family, he decided to stay on the ice. To stop him, they took away all of his tools. So, in the midst of a winter gale, he stepped out of their igloo, defecated, and honed the feces into a frozen blade, which he sharpened with a spray of saliva. With the knife he killed a dog. Using its rib cage as a sled and its hide to harness another dog, he disappeared into the darkness." p. 20

If that is wisdom, I would be hard-pressed not to say it isn't crap.

But most of all, I would counter (as I already have) that AI will not necessarily dull our thinking as it changes the problems we think about. There is causality in how logarithms made slide rules possible, which in turn made space travel and digital computers possible.

We don't become "smarter" if we abandon the learning of the past, the products of the present, or the intuition pumps of philosophy. I am happy to accept that my heart pumps blood, that temperature equates to molecular agitation, and the insights gained as the James Webb telescope peers at the remnants of the Big Bang. My calculator does the integrals where I used to make frequent mistakes, and I understand them better for it. That is potential wisdom, even if the consequences of writing currently seem to threaten our natural world.


Daniel

Monday, January 24, 2022 -- 7:42 PM

A bit surprised you didn't

A bit surprised you didn't catch it. It's not a Reductio, but rather a Begged Question. I've asked you to assume an unargued for premise, that wisdom is something urgently sought. Indeed it might not be, as in the case of the paintbrush above. Unproblematic is the notion that there could be different kinds of wisdom, but that still doesn't tell me how you can say something's wise without knowing anything about it beforehand. Your argument was that seeing it is analogous to the immediate taste of an apple, and knowing something about it prior to tasting it is like an orange when one has never tasted an orange; (post from 1/20/22, first paragraph). The response to this was that one would already have to know, in the provided analogy, what an apple tastes like in order to "know it when she/he tastes it", even if it is of an unfamiliar color. It's not clear therefore what the non-transformative potential of the color green regarding fruit-species is supposed to explain (third sentence). A few clarifications however fall to be made:

In my post of January 16, fifth sentence, reference was made to Bryson's use of writing technologies as analogous to "artificial intelligence" technologies as an example of her assumption that human intelligence is improved by the latter. Her argument there seems to be, "if writing makes you smarter, so does AI". My subsequent point about the labor of producing intelligent machines therefore in no way applies to writing, either as tool for various ends or the craft of using it.

It's also not clear how the discussion of Ludditism is relevant. Ludd was upset about the loom. He saw it, perhaps quite rightly, as a way to drive the weaver's guilds out of business so that labor-compensation could be minimized for the more unskilled workers in the newly erected factories. His solution was a simple one. Just demolish the factories. No one is suggesting that here. And if someone wants to call anyone who has objections to a deregulated market in artificially intelligent products a "Luddite", it can be safely assumed that the distinction between mass production of textiles and advanced production of intelligent machines is not well observed.

Your last two paragraphs strike me as a plausible defense of Bryson's position. If we had to spend large amounts of time working out problems a machine can solve, that is, supplying the correct answer to a question, we would be "less smart" than those who don't have to. The calculator example you provide is typical in this sense. Still, even if we thereby become better at mathematics and able to handle more complex mathematical problems, it is not clear that mathematical ability alone amounts to intelligence; and the calculator still has to be produced by laborers, who could stop their labor at any time and deprive the calculator's users of their mathematical intelligence, so that those who never used a calculator in the first place might be better off.

But the point under discussion here, at present anyway, the point I was trying to make, concerns the question of whether one needs to know something about wisdom before she/he can identify it in the event of its appearance or not. And this you haven't answered. Beyond telling your readers that it's not a game of cups (fourth sentence above) and suggesting that there are different kinds of wisdom, which I think is a good point, the issue of how you know what it isn't without knowing anything about what it is has not been addressed. Are you saying that if you knew what wisdom was already, you would have made yourself wiser as well?


Tim Smith

Wednesday, January 26, 2022 -- 10:18 PM

'Reductio ad eliminandum' is

'Reductio ad eliminandum' is a made-up term to respond to your question 'Is all wisdom a pile of crap?' In any case, you shouldn't ever be surprised that I miss things. I do—all the time. For example, I think I missed your argument altogether. You're asking more questions than making arguments. I believe you are trying to goad me a bit. I am easily provoked. That is where the fun is, perhaps.

You don't need to know wisdom in order to discern what isn't wisdom, and that isn't the same as knowing wisdom before you test for it. Anyone can test a hypothesis, and if it turns out correct, that is not wisdom, any more than knowledge of what isn't wisdom is. (Did I dodge the question?) That is the state of science and philosophy.

Analytically it may seem not not implies wisdom, but rarely if ever, OK, I'm goaded to say never, but then that would make the entire argument false, so rarely if ever is as far as I will go, rarely if ever does one find oranges. It is almost always a better-tasting apple. The best policy, the only policy really, is to say 'I don't know.' At least from there, you can move forward. If that means you ask a lot of questions, so be it. No one is wise in this world, with the possible exception of the person casting the questions. There is causality, but ( and it is a big but – I will not lie) even that gets whacky when you look at it close up. Some people win, many others lose, which doesn't make them wiser.

Fortunately, most things are repeatable. AI picks those things up quickly, and I would say, let AI have them. But that's not what Ned Ludd said. Whatever you say, the Luddites are relevant. We have already watched jobs disappear under the feet of good old-fashioned AI, expert systems, and fancier machine-learning techniques. Technology creates jobs, but to a large extent it takes them away.

We need to create new jobs while harnessing the profit from AI for the good of all humankind. At some point, we may have to give up work altogether. Many have already. Some of the houseless in our streets are computer analysts whose punchcard/spreadsheet mentalities have been transformed into algorithms that don't take any input or lookup table. Others have more direct paths to displacement.

AI owes these people a hot meal and a warm, dry place to live.

Have you read the Culture series by Iain Banks? That is where we are going in many ways. Consider Phlebas.

This is not an argument; it is a statement of my beliefs, and it is shakier than any wisdom.


Daniel

Monday, January 31, 2022 -- 6:27 PM

Just for the record, the

Just for the record, the question paraphrased in the first sentence above was not asking about "all wisdom", but what you thought about any wisdom. I happen to agree with you that it's something one can see without knowing what it is beforehand. But I wanted to see what your argument was for that before sharing mine. Compared with personhood, which belongs to everyone, wisdom belongs only to a select few. Hence a wise one is called "bright", the one who appears distinguished from the rest. And that further implies that it always belongs to someone else; and therefore to see it is analogous to seeing a physical object which, even if one has never seen it before, is recognized by the exclusivity of its appearance as something which one could be one's self, but isn't at the time. If one thinks of one's self as wise, that's a value-judgement rather than one thought to be objective of something which appears.


Tim Smith

Monday, January 31, 2022 -- 6:39 PM

There is little objective

There is little objective wisdom that can't be questioned in some way.

This was a fun interlocution for me. Don't ever take offense to my comments. Sorry if I come off that way. If I do, that is objectively unwise on my part.


sminsuk

Sunday, January 2, 2022 -- 2:17 PM

There are obviously a lot of

There are obviously a lot of fascinating angles to explore in this topic even if focused solely on robots/AI, but I hope the discussion will range a little further, because I think it can inform some other seemingly unrelated topics. I can think of two:

1) The abortion debate. People argue over whether or not the fetus is "alive" or whether it's "human", but as a biologist I can tell you that that is a silly way to frame the question. Of course fetuses and even embryos are alive, and human, but those facts don't illuminate the question at all, and life does not "begin at conception" (or "begin", at all). Sperm and unfertilized eggs are also alive, and are human, and I don't think most people would seriously try to protect them from "murder". The question is not any of those things; it is more properly, whether the fetus/embryo is a *PERSON*. But to answer that, we have to ask, what exactly is a "person", anyway? And we'll have to do the same, to address the robot question. And that's why I think that's the more fundamental question, which underlies both of these seemingly disparate topics.

2) In U.S. law, corporations are deemed "persons", and this has far reaching consequences for the economy, and for politics, and for democracy itself. Now this may be a bit of a red herring, since that's legalese, and the law does distinguish between corporations and what they call "natural" persons. Nonetheless I think this issue could likewise benefit from asking, just what is a person, anyway?

P.S. I normally don't think too much about it, but in this particular case I am highly amused by the Captcha asking me to confirm that "I'm not a robot" before it will allow me to post!


Tim Smith

Sunday, January 2, 2022 -- 2:31 PM

Too bad we can't react with

Too bad we can't react with emoticon - or this would have gotten a smiley face :-)

"P.S. I normally don't think too much about it, but in this particular case I am highly amused by the Captcha asking me to confirm that "I'm not a robot" before it will allow me to post!"


Tim Smith

Thursday, January 13, 2022 -- 12:49 PM

sminsuk,

sminsuk,

It doesn't look like the show took this up the way you do here, but as Joanna said... most thinking about AI is a reflection of psychology.

Maybe I'll say more about what you raise over in the blog. I don't know whether abortion can be discussed as freely as Citizens United, but both demand discussion premised on the attribution of personhood.


Harold G. Neuman

Tuesday, January 4, 2022 -- 4:04 PM

I won't hazard any guesses

I won't hazard any guesses. Especially against the backdrop of current events, whether social; cultural; political or otherwise. In reading philosophical thinkers, I am drawn to Nagel and Davidson. Mr. Nagel wrote that reality is about how things may be; not how they may seem. Davidson said there are propositional attitudes, including belief; desire; expectation; obligation and so on. I do not recall him mentioning love, but he may as well have done so. There are those who have a dog in the hunt; horse in the race, for artificial intelligence. They are proponents of something I am calling contextual reality. They have a stake in the quest---it is within the context of their life work. A bit like finding a 'new' dinosaur, with a dagger-bladed tail. Much more exotic than the former tank-like behemoth...

Contextual reality (CR) is pretty old. It emerged from intuition and that desire for at least fifteen minutes of fame. Sure, it can be argued: everything is real. All that is necessary is an eye; and, a beholder. It is more pertinent now because of mass and popular culture. And, the higher our level of technological achievement, the greater our acumen of contextual reality.

This is for Dwells. And anyone else who may be reading.


Daniel

Tuesday, January 4, 2022 -- 6:46 PM

The question of whether

The question of whether robots are persons or not can be reproduced in converse form as whether or not persons are (already) robots. Certainly the human body is explained as a machine, and therefore if one is to know anything about it, it's understood as a machine. But as it doesn't have an identifiable designer, it appears to be non-robotic, while nothing irrefutably confirms that that is not the case. So if one can't say human persons are not irrefutably non-robotic, could one say that machines can be demonstrably intelligent and therefore human persons in that sense? For if that's the case, there would be a strong case for considering apparent humanity as robotic in actuality.

In the first place, the intelligence of machines is called "artificial," which can mean various things. An artificial flower, for example, merely looks like a real flower. That kind of artificiality is excluded here, since the machine's intelligence is described as real. Apparently, then, one and the same intelligence must be produced in two different ways: the artificial kind by design, mechanism, and construction; the natural kind undesigned, spontaneous, and unbuilt. The principal difference here is a difference of judgement, which is essential in the latter and entirely absent in the former. Insofar as the power of judgement is deployed in producing the intelligent results of human thinking, humans therefore cannot be robots, and robots in turn cannot be genuinely human; though of course they may look like them, as with the artificial flower. On this account, sensibility and understanding both seem mechanically replicable. It's the faculty of judgement which appears to be categorically excluded.


Harold G. Neuman

Wednesday, January 5, 2022 -- 6:05 AM

All good thoughts and well

All good thoughts and well-presented. I allowed as how I would not hazard guesses. And, I sketched a notion of contextual reality. What bothers me some is not the field of AI, in itself. There are legitimate motives for pursuit of this branch of science; not the least of which is its potential for bettering the human condition. Though I am not certain all motives are so altruistic. If we, for the moment, credit science with improving our chances of continuance, we ought to embrace possible means to that end. What bothers me is an implicit nod to everlasting life that quietly accompanies the AI evolution. It is a sweet story, as told in Christian doctrine and texts. But it has no rational basis. Living things reproduce. Genes and genetic lines are continued...or not. Christian folk and others are admonished to have faith. As I said, it is a sweet story.

The larger question here, seems to me, is: SHOULD robots be people? The answer is predicated on whether there is/are legitimate reason(s) for such an outcome. One scenario pits one people's robot army against that of another, to avoid waste of human life. This notion has long been fodder for sci-fi stories. But, if the robots are people, the objective is lost. Or, as the man says, you can't have it both ways. It erases even the vaguest idea of contextual reality, while maintaining itself the purest form! Taking this absurdity a quantum leap forward, there would always be critical things robot people could not do: conceive and bear children; give blood; donate organs. The iceberg looms. And, they could not expect the sure and present hope of everlasting life---, or could they?


tartarthistle

Saturday, January 8, 2022 -- 6:20 PM

Just curious if philosophers

Just curious whether philosophers have already turned into robots?

No, seriously. Have "philosophers" sort of lost touch/become disconnected from what they claim to love?

It seems to me they have. It seems to me something rather like nothing has taken up shop in their place...


Daniel

Sunday, January 9, 2022 -- 6:09 PM

How is it clear that the term

How is it clear that the term "philosopher" is not genitive antecedent? I mean, wouldn't it be better translated from the Greek, if instead of "lover of wisdom", which doesn't make much sense to me either, it's translated as "wisdom of lovers", which in my view is a better indication of the respective domain of study, rather than a uniform eagerness of its proponents. Plato's image in the Phaedrus of love as a horse-drawn chariot pulled in two different directions seems to suggest this.


Tim Smith

Thursday, January 13, 2022 -- 12:54 PM

tartarthistle,

tartarthistle,

Sure, the show did seem a bit robotic.

Philosophy has indeed changed in our lifetimes, but I wouldn't say the robotic change is any greater than the political, personal, and/or social ones.

I'm not sure I get what you are saying here, but robots will likely outlive the metaphor as AI philosophers outdistance their human counterparts.


Daniel

Saturday, January 15, 2022 -- 6:38 PM

Forum participant Thistle is

Forum participant Thistle is clearly stating three distinct theses here:
1) If something loves, it is not a robot.
2) If a philosopher doesn't love, it may or may not be a robot, and
3) If a philosopher loves, is loving, or claims to love something which is in fact nothing, it looks like a robot.

While this argument is not conclusive, it gives me a rounded insight into what distinguishes persons from other things. Two things in the third premise are worth considering. One is the love philosophers profess by the name of their practice or profession: wisdom (sophia). That perception is corrected by making the first part of the compound a genitive antecedent. As with philostorgia (tender love), the more common rendering of ancient Greek compounds takes a genitive antecedent. This is also more consistent with Greek philosophy from Thales to Plato. "Wisdom" describes the kind of objects to be studied, not how to study them. "Lover" ("friend" in some translations) describes the inquirer who does the studying. Philosophy should therefore be called the wisdom of lovers, not the love of wisdom.

The second point raised by participant Thistle is, first, how something that is nothing could be at all, and second, relatedly, why, if one loves what is in fact nothing, or the converse, the appearance of a robot overwhelms that of a person. On the first point, the principal characteristic of nothing with respect to all other things is the way it "is": in the sense of nothing, everything recedes. Listening to silence is a good example. With nothing, there is nothing in particular. On the second point, love of nothing takes the place, in the observer's perception, of love of what is not nothing, for example another person. Such a one therefore looks like a robot. Though not conclusive, this line of reasoning suggests an avenue that was not discussed during the broadcast: if we accept that robots can be called "intelligent" in a genuine sense, can they also do philosophy?


Harold G. Neuman

Sunday, January 9, 2022 -- 10:19 AM

Science fiction has hinted at

Science fiction has hinted at robot personhood, or at least, personality. Humor comes to mind as an attribute. However, if a robot cannot laugh at his/her own humor, that humor is no more than mimicry. The same would seem to apply to other human sensibilities: empathy; sympathy; remorse; love; compassion and the like. We have seen some pretty amazing feats robots can be taught (programmed) to do. Dancing is one I saw recently. I almost laughed, but the choreography was too impressive for that response. I try to exclude the word impossible from my vocabulary, and try harder to think better. For me, though, artificial intelligence is only that. For now. Until the impossible becomes much more...


Daniel

Wednesday, January 12, 2022 -- 5:39 PM

Zoeon yeloion-- The laughing

Zoeon yeloion, the laughing animal. That is probably accurate, since hyenas don't obviously exhibit the behavior. One could say the same of crying, since although many animals produce tears, only humans, as far as I know, respond that way to emotional distress. So the idea is an insightful one. The trouble is that different kinds of laughter have different causes. Laughter at rapid manual agitation of the superficial tissue around the lower ribs, for example, is a purely mechanical response and can therefore be artificially reproduced. Sexist and racist humor also belongs to this category, since the laughter here is a response not to funniness but to offensiveness, and offensiveness can be sorted by demographic determinations and put into a program of one kind or another. It's the judgement of what's funny that seems absolutely excluded from artificial manufacture. The reasons for this are upon examination considerably informative. They have to do with what produces a judgement (e.g. a sensorial stimulation) and what a judgement produces (e.g. something suddenly understood). But I'll let the point stand as written, as still undecided is the question of whether, if an analysis of funniness is not itself funny, can it be true?


tartarthistle

Wednesday, January 19, 2022 -- 10:48 AM

The thistle just wanted

The thistle just wanted everyone to notice what no one seems to be noticing much these days. That is all. Poke. Poke. Poke. All men are mortal, but not all men risk annoying their wives, children, and political leaders by walking around barefoot in rags in public, just to demonstrate to anyone paying attention that how things appear (validity) and what things are (truth) are conceptually two different things, not two equal things. Both are necessary and both matter, but they are not the same thing, not equal things. Real philosophers know the difference between the two, appreciate both, and love wisdom. They don't pit the two against each other; they know how to tell the one thing (mind) from the other (matter), they love that wisdom, and they keep going after it.

Anyway, that's what the thistle thinks... Poke. Poke. Poke...


Daniel

Wednesday, January 19, 2022 -- 5:33 PM

Who says truth and validity

Who says truth and validity are identical? While I don't buy the "love of wisdom" business, partly because in my view the Greek word is mistranslated in common parlance (as explained above), your pairing of these as two distinct elements or things seems to me to be both sound and correct. With regards to the logic of argument, truth is a property of premises, whereas validity, from the Latin "validus", meaning "strong", is a characteristic of the connection of one premise to another. As an argument is described as a series of premises, one of which is a conclusion, both are "necessary", but only for the logic of argument. The abandonment of the use of argument in public discussion seems to be an attribute of the current period. Is that what your claim that "no one seems to be noticing much" has to do with?


Harold G. Neuman

Wednesday, January 19, 2022 -- 8:11 AM

Hmmmmmm...:::

Hmmmmmm...:::
"As we approach the advent of autonomous robots..." I keep getting stuck on just what that is. The opener. It seems to mean that robots will soon acquire the ability to program themselves. If that is taught to them by human programmers, how much creativity can humans teach them toward the intended outcome? Can creativity be taught? "In the broadest sense..." (see Sellars' statement about how things hang together) they are creative, insofar as those capacities are more adaptation than creation. AI tools can learn whatever programmers can teach. Chess machines have proven as much. But, are machines that are capable of besting human chessmasters creative or, are they marvelous manipulators of data? Is there a difference? I think there is. And, I THINK anyone who thinks about it will think so too.

There is something akin to entropy playing out here. Robot autonomy is not easy to reach. Its advent is tentative. I think.


Harold G. Neuman

Wednesday, January 19, 2022 -- 8:25 AM

Dear Thistle:

Dear Thistle:
I like your attitude!
Neuman.
