Reservoir models are surprisingly successful, and our bidirectional spiking networks have much in common with them.
A key feature of reservoirs is that they do not learn -- learning happens only in the output decoding layer.
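As a minimal NumPy sketch of this idea (the standard echo-state-network recipe with a toy delay task, not our actual spiking model): the recurrent weights stay fixed and random, and the only learning is a linear fit of the readout.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 1000

# Fixed random weights -- the reservoir itself never learns.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Toy task: reproduce a 5-step-delayed copy of a random input signal.
u = rng.uniform(-1, 1, (T, n_in))
y_target = np.roll(u[:, 0], 5)

# Run the reservoir forward, collecting its states.
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Learning happens only here: ridge regression on the readout weights.
lam = 1e-3
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ y_target)
y_pred = states @ W_out
print("readout MSE:", np.mean((y_pred - y_target) ** 2))
```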
Computationally, one potential advantage of not learning is that it preserves the full high-dimensional structure of the random initial network. Learning tends to drive networks toward lower-dimensional representations, because the learning signal itself is typically low-dimensional, and weight changes often push a large number of synapses in the same direction (up or down), driving them against their bounds. Hebbian learning in particular drives all neurons toward representing the leading principal component of variance in the inputs, as the sketch below illustrates.
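To make the dimensionality-collapse point concrete, here is a small demonstration using Oja's rule (a normalized variant of Hebbian learning, used here as a standard stand-in): a single unit's weight vector converges onto the leading principal component of its inputs, discarding everything else.

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-mean 2D inputs with most variance along a known direction.
pc1 = np.array([0.8, 0.6])                    # leading PC, unit length
X = rng.normal(0, 1, (5000, 2)) * [3.0, 0.3]  # variance 9 vs. 0.09
X = X @ np.stack([pc1, np.array([-0.6, 0.8])])  # rotate onto pc1 basis

# Oja's rule: dw = lr * y * (x - y * w), a self-normalizing Hebbian update.
w = rng.normal(0, 0.1, 2)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

# The learned weight vector matches the leading PC (up to sign).
print("learned w:", w / np.linalg.norm(w))
print("true  PC1:", pc1)
```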
So, might we see advantages from significantly reducing the amount of learning in our models? In particular, when the model is just "going forward" and nothing else of interest is happening, we could simply turn learning off. This would also yield major computational savings, since a non-learning pass is much faster than a learning one.
Biologically, neuromodulation by acetylcholine (ACh) and dopamine (DA) is known to strongly affect learning, and gating on these signals would restrict learning to moments of salience.
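A purely hypothetical sketch of what such gating could look like (the salience signal, threshold, and plain Hebbian update here are illustrative stand-ins, not our model's actual mechanism): the forward pass always runs, but the expensive weight update fires only when a phasic ACh/DA-like signal crosses threshold.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lr, threshold = 100, 0.005, 0.9
W = rng.normal(0, 0.1, (n, n))

updates = 0
for t in range(1000):
    x_pre = rng.normal(0, 1, n)
    x_post = np.tanh(W @ x_pre)          # the forward pass always runs

    # `salience` stands in for a phasic ACh/DA signal; the weight
    # update (the expensive part) runs only when it crosses threshold.
    salience = rng.uniform(0, 1)
    if salience > threshold:
        W += lr * np.outer(x_post, x_pre)  # simple Hebbian update
        updates += 1

print(f"learned on {updates}/1000 steps")
```

Under this scheme, most steps are pure inference, so the computational savings scale directly with how rarely salience crosses threshold.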