For one thing, neurons mostly send information in one direction.
For backpropagation to work in the brain, a perfect mirror image of each network of neurons would therefore have to exist in order to send the error signal backwards.
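A toy network makes the mirror-image objection concrete: in standard backprop, the error signal travels backwards through the transpose of the very weights used on the forward pass. The sizes, learning rate and task below are illustrative choices, not from any study mentioned here.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)) * 0.5   # input -> hidden weights
W2 = rng.normal(size=(3, 2)) * 0.5   # hidden -> output weights

x = rng.normal(size=4)               # a single training input
target = np.array([1.0, 0.0])

for step in range(200):
    h = np.tanh(x @ W1)              # forward pass
    y = h @ W2
    err = y - target                 # output error
    # Backward pass: the error travels through W2.T -- the same synapses
    # used forwards, run "in reverse". Real neurons are not known to
    # support this, hence the mirror-image requirement described above.
    delta_h = (err @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * np.outer(h, err)
    W1 -= 0.1 * np.outer(x, delta_h)

loss = float(((np.tanh(x @ W1) @ W2 - target) ** 2).sum())
```

The `err @ W2.T` term is exactly the mirror the text describes: a biological implementation would need a second, matched pathway to carry it.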
In addition, artificial neurons communicate using signals of varying strengths.
Biological neurons, for their part, send signals of fixed strengths, which the backprop algorithm is not designed to deal with.
All the same, the success of neural networks has renewed interest in whether some kind of backprop happens in the brain. There have been promising experimental hints it might.
A preprint study published in November 2023, for example, found that individual neurons in the brains of mice do seem to be responding to unique error signals, one of the crucial ingredients of backprop-like algorithms long thought lacking in living brains.
Scientists working at the boundary between neuroscience and AI have also shown that small tweaks to backprop can make it more brain-friendly.
One influential study showed that the mirror-image network once thought necessary does not have to be an exact replica of the original for learning to take place (albeit more slowly for big networks).
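The core idea of that finding, sometimes called feedback alignment, can be sketched minimally: route the error through a fixed random matrix `B` instead of an exact mirror of the forward weights, and learning still proceeds. All sizes and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=(3, 2)) * 0.5
B = rng.normal(size=(3, 2)) * 0.5    # fixed random feedback, never updated

x = rng.normal(size=4)
target = np.array([1.0, 0.0])

for step in range(500):
    h = np.tanh(x @ W1)
    err = h @ W2 - target
    # The error is routed through the random matrix B rather than W2.T,
    # so no exact replica of the forward weights is required.
    delta_h = (err @ B.T) * (1 - h ** 2)
    W2 -= 0.05 * np.outer(h, err)
    W1 -= 0.05 * np.outer(x, delta_h)

loss = float(((np.tanh(x @ W1) @ W2 - target) ** 2).sum())
```

Despite the "wrong" feedback pathway, the error on this toy task still shrinks: the forward weights gradually come into alignment with the fixed feedback.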
This makes it less implausible. Others have found ways of bypassing a mirror network altogether.
If artificial neural networks can be given biologically realistic features, such as specialised neurons that can integrate activity and error signals in different parts of the cell, then backprop can occur with a single set of neurons.
Some researchers have also made alterations to the backprop algorithm to allow it to process spikes rather than continuous signals. Other researchers are exploring rather different theories.
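One such alteration can be sketched with the surrogate-gradient trick: the forward pass emits all-or-nothing spikes, while the backward pass swaps in a smooth stand-in so the update still has a usable gradient. The weights, input and surrogate shape below are illustrative toy values.

```python
import numpy as np

w = np.array([-0.3, 0.4, -0.5, 0.2, 0.1])   # synaptic weights (toy values)
x = np.array([0.5, -0.2, 0.8, 0.1, -0.4])   # one input pattern
target = 1.0                                 # we want this neuron to fire

for step in range(100):
    v = float(w @ x)                  # membrane potential
    spike = 1.0 if v > 0.0 else 0.0   # fixed-strength, all-or-nothing signal
    err = spike - target
    # The true derivative d(spike)/dv is zero almost everywhere, which
    # stalls plain backprop; substitute a smooth surrogate instead:
    surrogate = 1.0 / (1.0 + abs(v)) ** 2
    w -= 0.1 * err * surrogate * x

fires = float(w @ x) > 0.0
```

After training, the neuron's potential has been pushed above threshold, so it fires as required, even though the spike itself gave no gradient to work with.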
In a paper published in Nature Neuroscience earlier this year, Yuhang Song and colleagues at Oxford University laid out a method that flips backprop on its head.
In conventional backprop, error signals lead to adjustments in the synapses, which in turn cause changes in neuronal activity.
The Oxford researchers proposed that the network could change the activity in the neurons first, and only then adjust the synapses to fit. They called this prospective configuration.
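An illustrative toy of the "activities first, synapses second" idea: the network first settles its hidden activity toward values that would account for the target, and only then nudges each set of weights to reproduce that settled activity. This is a sketch of the general idea, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=(3, 2)) * 0.5
x = rng.normal(size=4)
target = np.array([1.0, 0.0])

def loss():
    return float(((np.tanh(x @ W1) @ W2 - target) ** 2).sum())

loss_before = loss()
for step in range(200):
    h = np.tanh(x @ W1)                      # feedforward activity
    # Step 1: settle the activity itself toward a configuration that
    # accounts for the target, before touching any synapse.
    for _ in range(20):
        h = h - 0.1 * ((h @ W2 - target) @ W2.T)
    # Step 2: only then adjust the synapses to fit the settled activity.
    pred = np.tanh(x @ W1)
    W1 -= 0.1 * np.outer(x, (pred - h) * (1 - pred ** 2))
    W2 -= 0.1 * np.outer(h, h @ W2 - target)
loss_after = loss()
```

Note the inversion relative to backprop: here the weight updates chase a changed activity pattern, rather than activity changing as a consequence of weight updates.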
When the authors tested out prospective configuration in artificial neural networks, they found that the networks learned in a much more human-like way than models trained with backprop: more robustly, and with less training.
They also found that the network offered a much closer match for human behaviour on other very different tasks, such as one that involved learning how to move a joystick in response to different visual cues.
For now, though, all of these theories are just that: theories. Designing experiments to prove whether backprop, or any other algorithm, is at play in the brain is surprisingly tricky.
For Aran Nayebi and colleagues at Stanford University this seemed like a problem AI could solve.
The scientists used one of four different learning algorithms to train over a thousand neural networks to perform a variety of tasks.
They then monitored each network during training, recording neuronal activity and the strength of synaptic connections.
Dr Nayebi and his colleagues then trained another supervisory meta-model to deduce the learning algorithm from the recordings.
They found that the meta-model could tell which of the four algorithms had been used by recording just a couple of hundred virtual neurons at various intervals during learning.
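A toy version of that experiment: simulate "recordings" from networks trained by two different learning rules, then have a simple meta-model identify the rule from the recordings alone. The two rules, the recorded feature and the nearest-centroid classifier below are illustrative stand-ins for the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.array([0.9, -0.3, 0.5])          # one fixed training input

def record_run(rule):
    """Train a one-layer net on the task, record how fast its error decays."""
    w = rng.normal(size=3) * 0.5
    errs = []
    for _ in range(10):
        err = float(w @ x) - 1.0
        errs.append(abs(err))
        step = x if rule == "gradient" else np.sign(x)
        w -= 0.1 * err * step               # one of two learning rules
    e = np.array(errs)
    return float(np.mean(e[1:] / e[:-1]))   # per-step error decay factor

# "Meta-model": a nearest-centroid classifier over the recorded feature.
centroids = {r: float(np.mean([record_run(r) for _ in range(20)]))
             for r in ("gradient", "sign")}

def classify(feature):
    return min(centroids, key=lambda r: abs(feature - centroids[r]))

correct = sum(classify(record_run(r)) == r
              for r in ("gradient", "sign") for _ in range(10))
accuracy = correct / 20
```

Because the two rules leave different statistical fingerprints in the recordings, the classifier can recover which rule was used without ever seeing the weights' update equations, which is the hope for real neural recordings too.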
The researchers hope such a meta-model could do something similar with equivalent recordings of a real brain.
Identifying the algorithm, or algorithms, that the brain uses to learn would be a big step forward for neuroscience.
Not only would it shed light on how the body's most mysterious organ works, it could also help scientists build new AI-powered tools to try to understand specific neural processes.
Whether it could lead to better AI algorithms is unclear. For Dr Hinton, at least, backprop is probably superior to whatever happens in the brain.