One is that we should avoid an arms race and lethal autonomous weapons.

The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us.

Alright, now raise your hand if your computer has ever crashed. Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.