Five decades of research into artificial neural networks have earned Geoffrey Hinton the moniker of the Godfather of artificial intelligence (AI).
Work by his group at the University of Toronto laid the foundations for today's headline-grabbing AI models, including ChatGPT and LaMDA.
These can write coherent (if uninspiring) prose, diagnose illnesses from medical scans and navigate self-driving cars. But for Dr Hinton, creating better models was never the end goal.
His hope was that by developing artificial neural networks that could learn to solve complex problems, light might be shed on how the brain's neural networks do the same.
Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others must be weakened.
But because the brain has billions of neurons, of which millions could be involved in any single task, scientists have puzzled over how it knows which synapses to tweak and by how much.
Dr Hinton popularised a clever mathematical algorithm known as backpropagation to solve this problem in artificial neural networks.
But it was long thought to be too unwieldy to have evolved in the human brain.
Now, as AI models are beginning to look increasingly human-like in their abilities, scientists are questioning whether the brain might do something similar after all.
Working out how the brain does what it does is no easy feat.
Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or handfuls of neurons in a Petri dish.
It's often not clear whether living, learning brains work by scaled-up versions of these same rules, or if something more sophisticated is taking place.
Even with modern experimental techniques, wherein neuroscientists track hundreds of neurons at a time in live animals, it is hard to reverse-engineer what is really going on.
One of the most prominent and longstanding theories of how the brain learns is Hebbian learning.
The idea is that neurons which activate at roughly the same time become more strongly connected, a principle often summarised as "cells that fire together wire together".
Hebbian learning can explain how brains learn simple associations: think of Pavlov's dogs salivating at the sound of a bell.
But for more complicated tasks, like learning a language, Hebbian learning seems too inefficient.
Even with huge amounts of training, artificial neural networks trained in this way fall well short of human levels of performance.
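In code, Hebbian learning amounts to a strikingly simple update rule. The sketch below is a toy illustration, not a model of any real circuit: the four-neuron network, the activity patterns and the learning rate are all invented for the example.

```python
# A minimal sketch of the Hebbian rule: a synapse is strengthened in
# proportion to how often its two neurons fire together. All sizes and
# values here are illustrative assumptions.

n_neurons = 4
weights = [[0.0] * n_neurons for _ in range(n_neurons)]  # synaptic strengths
learning_rate = 0.1

# A few activity patterns (1 = firing). Neurons 0 and 1 always fire
# together, like the bell and the food in Pavlov's experiment.
patterns = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]

for activity in patterns:
    for i in range(n_neurons):
        for j in range(n_neurons):
            if i != j and activity[i] and activity[j]:
                # "Cells that fire together wire together."
                weights[i][j] += learning_rate

# The 0-1 synapse ends up strongest: the association has been learned.
print(weights[0][1], weights[2][3])
```

Note that the rule only ever looks at local, pairwise activity, which is part of its appeal as a biological theory, and part of why it struggles with tasks where credit for an outcome must be shared across many distant neurons.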
Today's top AI models are engineered differently. To understand how they work, imagine an artificial neural network trained to spot birds in images.
Such a model would be made up of thousands of synthetic neurons, arranged in layers.
Pictures are fed into the first layer of the network, which sends information about the content of each pixel to the next layer through the AI equivalent of synaptic connections.
Here, neurons may use this information to pick out lines or edges before sending signals to the next layer, which might pick out eyes or feet.
This process continues until the signals reach the final layer responsible for getting the big call right: "bird" or "not bird".
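That layer-by-layer flow can be sketched in a few lines of Python. Everything here is an illustrative stand-in: the pixel values, the layer sizes and the random weights of an untrained network, not the internals of any real bird-spotter.

```python
import math
import random

# Toy forward pass through a layered classifier. Weights are random
# placeholders (this network has not been trained), and the layer sizes
# are illustrative assumptions.
random.seed(0)

def layer(inputs, n_out):
    """One fully connected layer: weighted sums squashed to (0, 1)."""
    return [
        1 / (1 + math.exp(-sum(random.uniform(-1, 1) * x for x in inputs)))
        for _ in range(n_out)
    ]

pixels = [0.2, 0.8, 0.5, 0.1]   # first layer receives raw pixel values
edges = layer(pixels, 3)        # an early layer might pick out lines or edges
parts = layer(edges, 3)         # a deeper layer might pick out eyes or feet
score = layer(parts, 1)[0]      # final layer makes the big call

print("bird" if score > 0.5 else "not bird")
```

With random weights the verdict is essentially a coin flip; the point of training is to adjust those connections until the final score reliably tracks the right answer.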
Integral to this learning process is the so-called backpropagation-of-error algorithm, often known as backprop.
If the network is shown an image of a bird but mistakenly concludes that it is not, then, once it realises the gaffe, it generates an error signal.
This error signal moves backwards through the network, layer by layer, strengthening or weakening each connection in order to minimise any future errors.
If the model is shown a similar image again, the tweaked connections will lead the model to correctly declare: "bird".
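Put together, the forward pass and the backward-moving error signal make a complete, if tiny, training loop. The sketch below assumes a two-neuron hidden layer, a made-up pair of "bird" and "not bird" examples and an arbitrary learning rate; it illustrates the algorithm, not any production model.

```python
import math
import random

# Minimal backpropagation on a tiny two-layer network. Toy data, layer
# sizes and learning rate are illustrative assumptions.
random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
lr = 0.5

def predict(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Toy examples: feature pairs labelled bird (1.0) or not bird (0.0).
data = [([0.9, 0.1], 1.0), ([0.1, 0.9], 0.0)]

for _ in range(2000):
    for x, target in data:
        # Forward pass: signals flow through the layers.
        hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
        out = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

        # Backward pass: the error signal moves back, layer by layer.
        d_out = (out - target) * out * (1 - out)
        d_hidden = [d_out * w_out[j] * hidden[j] * (1 - hidden[j])
                    for j in range(2)]

        # Strengthen or weaken each connection to shrink future errors.
        for j in range(2):
            w_out[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w_hidden[j][i] -= lr * d_hidden[j] * x[i]

print("bird" if predict([0.9, 0.1]) > 0.5 else "not bird")
```

The backward pass is what Crick and others found biologically suspect: each hidden neuron's update uses the exact weights of connections downstream of it, information a real neuron has no obvious way to access.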
Neuroscientists have always been sceptical that backpropagation could work in the brain.
In 1989, shortly after Dr Hinton and his colleagues showed that the algorithm could be used to train layered neural networks, Francis Crick, the Nobel laureate who co-discovered the structure of DNA, published a takedown of the theory in the journal Nature.
Neural networks using the backpropagation algorithm were biologically "unrealistic in almost every respect", he said.