27 November 2016

Thanks to Prof. Erkki Oja's work, it is well known that a single neuron trained with simple Hebbian learning plus a weight-decay term (Oja's rule) learns to extract the first principal component of its input data.
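
For concreteness, here is a minimal sketch of that single-neuron case (my own toy example, not code from Oja's papers): Oja's rule applied to synthetic 2-D data whose variance is dominated by one direction. The weight vector converges, up to sign, to the first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inputs whose first principal component lies along (1, 0.5).
n_samples = 5000
pc = np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5])
X = rng.normal(size=(n_samples, 1)) * pc * 3.0 + rng.normal(size=(n_samples, 2)) * 0.3

w = rng.normal(size=2)          # weight vector of the single neuron
eta = 0.01                      # learning rate

for x in X:
    y = w @ x                   # neuron output
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus weight decay

print("learned w:", w / np.linalg.norm(w))
print("true PC1 :", pc)
```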

However, I'd like to validate my intuitions about how this generalizes to a Hebbian network that uses competitive learning/lateral inhibition between the neurons within a layer.

So, given a competitive network model with multiple Hebbian neurons arranged in a single layer, I would assume that the neurons roughly learn to differentiate along the first principal component (see the sketch below).
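
To make the question concrete, here is a minimal sketch of the kind of layer I have in mind, with hard winner-take-all competition standing in for lateral inhibition; the learning rule, neuron count, and data are my own illustrative choices, not taken from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same kind of toy data: variance dominated by the direction (1, 0.5).
pc = np.array([1.0, 0.5]) / np.linalg.norm([1.0, 0.5])
X = rng.normal(size=(5000, 1)) * pc * 3.0 + rng.normal(size=(5000, 2)) * 0.3

n_neurons = 4
W = rng.normal(size=(n_neurons, 2)) * 0.1   # one weight row per neuron
eta = 0.02

for x in X:
    y = W @ x                     # responses of all neurons in the layer
    winner = np.argmax(y)         # hard lateral inhibition: winner takes all
    # Hebbian update with Oja-style decay, applied only to the winning neuron
    W[winner] += eta * y[winner] * (x - y[winner] * W[winner])

# If the supposition holds, the winning neurons' weight vectors should
# line up (with opposite signs) along the first principal component.
print(W / np.linalg.norm(W, axis=1, keepdims=True))
```

In this toy run the winners' weight vectors end up pointing in opposite directions along the first principal component, which is roughly what I mean by the neurons "differentiating along" it; whether this behaviour holds in general, or under softer forms of lateral inhibition, is exactly what I'd like confirmed.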

Could anybody please validate or reject this supposition and/or point me to literature on the topic? Most sources only consider single Hebbian neurons or chained ones (Sanger's rule).
