Is there an algorithm to train an SNN for some classification benchmark problem? If so, how are the inputs/outputs encoded (as the feature vectors of classification problems are usually numbers, not spike trains)?
We did exactly what you describe on neuromorphic hardware. The paper came out today: http://www.pnas.org/content/early/2014/01/23/1303053111 (open access).
The encoding step is actually not trivial. In contrast, the learning can often be done using a simple perceptron-style rule.
If you want to train your network based on a target vector, you need a mechanism to compute the error of your output. In an SNN the outputs are a set of spikes, so you have to define a convention for how to interpret the output spikes. For example, if you consider the winner to be the neuron that fires earliest, you must update the weights in such a way that the desired neuron fires first and the firing of the other neurons is delayed.
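As a minimal sketch of this idea: decode the winner as the earliest-firing output neuron, and nudge the weights so the target neuron fires earlier and the wrong winner fires later. The function names and the `presyn_activity` quantity are hypothetical placeholders for illustration, not a specific published rule.

```python
import numpy as np

def decode_winner(first_spike_times):
    """Winner-take-all readout: the class whose output neuron fires earliest.
    first_spike_times[i] is the first-spike time of output neuron i
    (np.inf if it never fired)."""
    return int(np.argmin(first_spike_times))

def perceptron_style_update(weights, presyn_activity, target, predicted, lr=0.01):
    """Perceptron-style correction on a misclassified sample.
    'weights' has shape (n_outputs, n_inputs); 'presyn_activity' is some
    per-input measure of presynaptic contribution (e.g. the PSP at the
    output spike time) -- an assumed quantity used here for illustration."""
    if predicted != target:
        weights[target] += lr * presyn_activity      # strengthen -> earlier spike
        weights[predicted] -= lr * presyn_activity   # weaken -> later spike
    return weights
```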
As for the input, you need to encode the real values into spike trains. For example, for a 5-dimensional feature vector you would have five input neurons, where the input neuron whose corresponding feature has a higher value spikes sooner than the others. In this way you propagate the input as an ordering of spike times.
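A small sketch of such a latency (time-to-first-spike) encoding, assuming features are rescaled per sample and the 10 ms encoding window is an arbitrary illustrative choice:

```python
import numpy as np

def latency_encode(features, t_max=10.0):
    """Map each feature to a single spike time in [0, t_max]:
    larger values spike earlier; the smallest value spikes at t_max."""
    f = np.asarray(features, dtype=float)
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)   # normalise to [0, 1]
    return (1.0 - f) * t_max                          # high value -> early spike

# Example: a 5-dimensional feature vector -> five input spike times
print(latency_encode([0.2, 0.9, 0.5, 0.1, 0.7]))
```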
You can encode the stimulus in arguably two ways: either as a rate (simply encode the stimulus as a spike frequency), which is a bit clunky and lacks resolution for many applications, or with the temporal (population) encoding method from Bohte's paper that Pablo mentioned above. The latter spreads the encoding of the stimulus over a number of neurons and therefore offers much more resolution. As for how you decode the output, I would recommend one output neuron per class for classification problems, and decoding the result in terms of spike frequency. I did some benchmark classification work on SNNs a number of years ago which might shed some light on the process:
Receptive field optimisation and supervision of a fuzzy spiking neural network
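To make the decoding side concrete, here is a minimal sketch of the one-output-neuron-per-class, rate-based readout described above; the function name and the example spike trains are made up for illustration:

```python
import numpy as np

def decode_by_rate(output_spikes, sim_time):
    """Rate-based readout: one output neuron per class; the predicted class
    is the neuron with the highest firing rate over the simulation window.
    'output_spikes' is a list of spike-time arrays, one per output neuron."""
    rates = np.array([len(times) / sim_time for times in output_spikes])
    return int(np.argmax(rates)), rates

# Example: three classes, the neuron for class 1 fires most often in a 1 s window
spikes = [np.array([0.1, 0.6]), np.array([0.05, 0.2, 0.4, 0.8]), np.array([0.3])]
label, rates = decode_by_rate(spikes, sim_time=1.0)   # label == 1
```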
Hello, I am trying to implement a spiking neural network for classification. Can anyone tell me how to obtain the firing time of a spiking neuron, and how to obtain the delay (so that, depending on the timing difference, we can update the weights) at the implementation level?
Here the delay is meant in the context of a three-layer network, going from the input layer to the hidden layer to the output layer.
The simplest way is to linearly encode the spike trains per sample for a suitable sample length, i.e. fixed-rate spike trains. Spiking neurons typically fire at a rate of less than 50 Hz, and certainly less than 100 Hz (the refractory period being a limiting factor). So, for example, you could simply set the maximum values in the benchmark training data to correspond to 50 Hz and the minimum to, say, 5 Hz. It is then pretty trivial to work out what interspike interval you need for each encoded spike train.
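A sketch of that linear rate encoding, assuming the 5-50 Hz range and a 1 s presentation window as illustrative choices:

```python
import numpy as np

def rate_encode(value, v_min, v_max, f_min=5.0, f_max=50.0, duration=1.0):
    """Linearly map a feature value onto a firing rate between f_min and f_max (Hz)
    and return a regular spike train of that rate over 'duration' seconds."""
    rate = f_min + (value - v_min) / (v_max - v_min) * (f_max - f_min)
    isi = 1.0 / rate                                   # interspike interval (s)
    return np.arange(0.0, duration, isi), rate

# Example: a feature of 0.8 on a [0, 1] scale -> 41 Hz, ISI of roughly 24 ms
spike_times, rate = rate_encode(0.8, v_min=0.0, v_max=1.0)
```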
Of course, this only really works for classification problems where the training data consist of booleans, integers, or floats with very few decimal places. For benchmarks such as XOR, Iris (used by Michael above to demonstrate their neuromorphic hardware) or the Wisconsin Breast Cancer data this will be adequate. For more complex training samples (high numbers of decimal places etc.) you can use a population code such as the one in the paper by Bohte (see Pablo's comment above). This spreads the encoding over a small population of neurons (in much the same way that binary numbers are encoded by sequences of multiple bits), and you can get much more resolution this way.
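For illustration, here is a rough sketch of a Bohte-style population code using Gaussian receptive fields; the number of neurons, the width formula and the 10 ms window are assumptions of this sketch, not the exact parameters from the paper:

```python
import numpy as np

def population_encode(value, v_min, v_max, n_neurons=8, t_max=10.0):
    """Population code: a bank of Gaussian receptive fields covers the value
    range, and each neuron's activation is converted into a spike time
    (strong activation -> early spike)."""
    centres = np.linspace(v_min, v_max, n_neurons)
    sigma = (v_max - v_min) / (n_neurons - 1) / 1.5    # receptive-field width
    activation = np.exp(-0.5 * ((value - centres) / sigma) ** 2)  # in (0, 1]
    return (1.0 - activation) * t_max                  # high activation -> early spike

# Example: encode a single Iris-like feature value over 8 input neurons
print(population_encode(5.1, v_min=4.0, v_max=8.0))
```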
Ultimately, it does not matter much what you do to the input data to convert it to a spiking representation, since the training is compressing it into labels anyway. So feel free to scale it however you like to obtain suitable firing rates. Just be careful that, whichever coding scheme you use, you apply it consistently across all the data.
One final comment: whilst it's a good exercise to understand spiking networks by doing benchmark classification problems, it is hard to compete with traditional techniques and the backprop algorithm. Despite being guilty of publishing papers applying spiking networks to benchmarks like this myself, I am now of the strong opinion that spiking networks should be applied to temporal data such as audio and video, where it makes sense to use a spiking neuron for processing without trying to turn the whole network into a rate coder. Spiking neurons in vivo do not operate like this; they are true temporal processors, which implies we need to train them on temporal data rather than static images and numbers. Of course, this is a whole different proposition, but I think it is a more useful direction to go.
In reply to Cornelius, your latest statement reflects my thoughts precisely: since spiking networks naturally encode time, they are inherently better suited to time-series data than to static data.
Furthermore, I think that spiking networks will have their best chance of outperforming "classical" algorithms when running on dedicated neuromorphic hardware that also supports learning. Such hardware is emerging in several labs around the globe. Among the strongest proponents of neuromorphic hardware is the Human Brain Project, which will make its hardware publicly accessible in March. The challenge is to find a problem that is hard to solve with conventional hardware but can be addressed with spiking networks on neuromorphic hardware.