This is Scientific American's 60-second Science, I'm Christopher Intagliata.
Last year, Google unveiled Duplex, its artificial-intelligence-powered assistant.
"How can I help you?"
"Hi, I'm calling to book a women's haircut for a client. Um, I'm looking for something on May 3rd."
That's a robot.
"Sure, give me one second."
"Mm-hmm."
"For what time are you looking for around?"
The machine assistant never identified itself as a bot in the demo. And Google got a lot of flak for that. They later clarified that they would only launch the tech with "disclosure built in."
But therein lies a dilemma1, because a new study in the journal Nature Machine Intelligence suggests that a bot is most effective when it hides its machine identity.
"That is, if it is allowed to pose as human."
Talal Rahwan is a computational social scientist at New York University's campus in Abu Dhabi. His team recruited nearly 700 online volunteers to play the prisoner's dilemma—a classic game of negotiation2, trust and deception—against either humans or bots. Half the time, the human players were told the truth about who they were matched up against. The other half, they were told they were playing a bot when they were actually playing a human or that they were battling a human when, in fact, it was only a bot.
And the scientists found that bots actually did remarkably3 well in this game of negotiation—if they impersonated humans.
"When the machine is reported to be human, it outperforms humans themselves. It's more persuasive4; it's able to induce cooperation and persuade the other opponent to cooperate more than humans themselves."
But whenever the bots' true nature was disclosed, their superiority vanished. And Rahwan says that points to a fundamental conundrum5. We can now build really efficient bots—that perform tasks even better than we can—but their efficiency may be linked to their ability to hide their identity—which, you know, feels ethically6 problematic.
"Those very humans who will be deceived by the machine, they are the ones who ultimately have to make that choice. Otherwise it would violate fundamental values of autonomy, respect and dignity for humans."
It's not realistic to ask people for consent before every bot-human interaction. That would, of course, reveal the bots' true identity. So we, as a society, will have to figure out if making our lives a bit easier is worth interacting with bots that pretend to be human.
"Mm-hmm."
Thanks for listening, for Scientific American's 60-Second Science. I'm Christopher Intagliata.
1 dilemma
n. a predicament; a situation in which either choice is undesirable

2 negotiation
n. negotiation; discussion aimed at reaching an agreement

3 remarkably
adv. unusually; to a considerable degree

4 persuasive
adj. persuasive; able to convince

5 conundrum
n. a riddle; a difficult problem

6 ethically
adv. in terms of ethics; morally