Issue 2134: AI Adding to Threat of Election Disinformation Worldwide

Artificial intelligence (AI) is adding to the threat of election disinformation worldwide. 

The technology makes it easy for anyone with a smartphone and an imagination to create fake – but convincing – content aimed at fooling voters. 

Just a few years ago, fake photos, videos or audio required teams of people with time, skill and money. Now, free and low-cost generative artificial intelligence services from companies like Google and OpenAI permit people to create high-quality “deepfakes” with just a simple text entry. 

A wave of AI deepfakes tied to elections in Europe and Asia has been appearing on social media for months. It has served as a warning for the more than 50 countries holding elections this year.

Some recent examples of AI deepfakes include: 

— A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia. 

— Audio of Slovakia’s liberal party leader discussing changing ballots and raising the price of beer. 

— A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini. 

The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Henry Ajder, who runs a business advisory company called Latent Space Advisory in Britain. 

“You don’t need to look far to see some people ... being clearly confused as to whether something is real or not,” Ajder said. 

As the U.S. presidential race draws closer, Christopher Wray, the director of the Federal Bureau of Investigation, issued a warning about the growing threat of generative AI. He said the technology makes it easy for foreign groups to attempt to have a bad influence on elections.

With AI deepfakes, a candidate’s image can be made much worse or much better. Voters can be moved toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that the growth of AI deepfakes could hurt the public’s trust in what they see and hear. 

The complexity of the technology makes it hard to find out who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the problem. 

The world’s biggest tech companies recently — and voluntarily — signed an agreement to prevent AI tools from disrupting elections. For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its services. 

But deepfakes are harder to limit on apps like Telegram, which did not sign the voluntary agreement. Telegram uses encrypted messages that can be difficult to uncover. 

Some experts worry that efforts to limit AI deepfakes could lead to unplanned results. 

Tim Harper is an expert at the Center for Democracy and Technology in Washington, DC. He said sometimes well-meaning governments or companies might crush the “very thin” line between political commentary and an “illegitimate attempt to smear a candidate.” 

Major generative AI services have rules to limit political disinformation. But experts say it is too easy to defeat the restrictions or use other services. 

AI software is not the only threat. 

Candidates themselves could try to fool voters by claiming that real events showing them in bad situations were manufactured by AI.

Lisa Reppell is a researcher at the International Foundation for Electoral Systems in Arlington, Virginia. 

She said, “A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for … democracy.”
