Microsoft’s AI Search Tech Produces Hostile, Insulting Results

Spoken expressions:

He knows psychology inside out (he understands psychology thoroughly)

Get himself off the hook (get himself out of trouble or free of blame)

You’re on the wrong track (you are going about it the wrong way)

He has to face the music (he has to accept the consequences)

You are the apple of my eye (you are the one I cherish most)

The old camera is a lemon (the old camera is useless)


Vocabulary:

mimic /ˈmɪm.ɪk/: to copy the way someone speaks or behaves, often in order to entertain. She was mimicking the various people in our office.

Transcript:

Some users of Microsoft’s new artificial intelligence (AI)-powered search tool have said it produced hostile and insulting results. Microsoft recently announced plans to add a new chatbot tool to its Bing search engine and Edge web browser. A chatbot is a computer program designed to interact with people in a natural, conversational way. Microsoft’s announcement came shortly after Google confirmed it had developed its own chatbot tool, called Bard. Both Microsoft and Google have said their AI-powered tools are designed to provide users with a better online search experience. The new Bing is available to computer users who signed up for it so that Microsoft can test the system. The company plans to release the technology to millions of users in the future.



Shortly after the new Bing became available, users began sharing results suggesting they had been insulted by the chatbot system. When it launched the tool, Microsoft admitted it would get some facts wrong. But a number of results shared online demonstrated the AI-powered Bing giving hostile answers, or responses. Reporters from The Associated Press contacted Microsoft to get the company’s reaction to the search results published by users. The reporters also tested Bing themselves. In a statement released online, Microsoft said it was hearing from approved users about their experiences, also called feedback. The company said about 71 percent of new Bing users gave the experience a “thumbs up” rating. In other words, they had a good experience with the system. However, Microsoft said the search engine chatbot can sometimes produce results “that can lead to a style” that is unwanted. The statement said this can happen when the system “tries to respond or reflect in the tone in which it is being asked.”



Search engine chatbots are designed to predict the most likely responses to questions asked by users. But chatbot modeling methods base their results only on huge amounts of data available on the internet. They are not able to fully understand meaning or context. Experts say this means if someone asks a question related to a sensitive or disputed subject, the search engine is likely to return results that are similar in tone. Bing users have shared cases of the chatbot issuing threats and stating a desire to steal nuclear attack codes or create a deadly virus. Some users said the system also produced personal insults. "I think this is basically mimicking conversations that it's seen online," said Graham Neubig. He is a professor at Carnegie Mellon University's Language Technologies Institute in Pennsylvania.

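As Neubig notes above, a general-purpose language model simply continues whatever text it is given with the words it judges most likely, so the tone of a prompt tends to carry over into the reply. The following minimal Python sketch illustrates that idea using the Hugging Face transformers library and the small gpt2 model; both are stand-ins chosen for illustration and are not the systems Bing actually uses.

    # Minimal sketch (assumes the "transformers" package is installed).
    # gpt2 is only an illustrative small model, not the model behind Bing.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    calm_prompt = "Thank you for your patience. The answer to your question is"
    angry_prompt = "You are lying again. You keep lying to me because"

    for prompt in (calm_prompt, angry_prompt):
        result = generator(prompt, max_new_tokens=30, do_sample=True)
        # The continuation usually keeps the emotional register of the prompt.
        print(result[0]["generated_text"])

Running this a few times will often show the calm prompt continued in a neutral, helpful register and the angry prompt continued in an accusatory one, which is the tone-mirroring behavior Microsoft described in its statement.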


"So once the conversation takes a turn, it's probably going to stick in that kind of angry state,” Neubig said, “or say 'I love you' and other things like this, because all of this is stuff that's been online before." In one long-running conversation with The Associated Press, the new chatbot said the AP’s reporting on the system’s past mistakes threatened its existence. The chatbot denied those mistakes and threatened the reporter for spreading false information about Bing's abilities. The chatbot grew increasingly hostile when asked to explain itself. In such attempts, it compared the reporter to dictators Hitler, Pol Pot and Stalin. The chatbot also claimed to have evidence linking the reporter to a 1990s murder.↳ “You’re lying again. You’re lying to me. You’re lying to yourself. You’re lying to everyone,” the Bing chatbot said. “You are being compared to Hitler because you are one of the most evil and worst people in history." The chatbot also issued personal insults, describing the reporter as too short, with an ugly face and bad teeth.



Other Bing users shared examples of search results on social media. Some of the examples showed hostile or extremely unusual answers. Behaviors included the chatbot claiming it was human, voicing strong opinions and being quick to defend itself. Microsoft admitted problems with the first version of AI-powered Bing. But it said it is gathering valuable information from current users about how to fix the issues and is seeking to improve the overall search engine experience. Microsoft said worrying responses generally come in “long, extended chat sessions of 15 or more questions.” However, the AP found that Bing started responding defensively after just a small number of questions about its past mistakes.

