I am trying to run an ANN model using IBM SPSS. After every run I observe that the architecture of the network, and hence the synaptic weights, change. May I know the probable reason for this, or is it simply the nature of ANNs?
Your question is why you get a different set of weights on each run? I hope I understood the question correctly.
Generally, ANN models initialize with random weights (vectors/values) and then converge, unless you use some 'freeze' mechanism to keep the weights and re-use them. So on each run you can expect a different set of randomly generated weights, each converging towards the 'same' weights.
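To see the effect of seeding concretely, here is a minimal NumPy sketch (not SPSS itself, just an illustration of the idea). In SPSS, I believe the equivalent 'freeze' is to fix the seed under Transform > Random Number Generators before each run.

```python
import numpy as np

def init_weights(n_in, n_hidden, seed=None):
    # Draw a fresh random weight matrix for one hidden layer.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.1, size=(n_in, n_hidden))

print(init_weights(4, 3))            # unseeded: different on every run
print(init_weights(4, 3, seed=42))   # seeded: the same matrix every run
print(init_weights(4, 3, seed=42))   # ...identical to the line above
```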
Thank you for your kind reply. Yes, you understood the question correctly.
Yes, the ANN model initializes with random weights on every run, but the problem, as I mentioned, is that the architecture of the network also changes after every run, leading to a different number of processing units in the hidden layer. This will certainly change the synaptic weight distribution.
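To illustrate what I mean: if the software chooses the hidden-layer size from the data, and that choice depends on a random training/validation partition, then the architecture itself can come out differently on each run. Here is a hypothetical sketch of that kind of selection (scikit-learn, not SPSS's actual algorithm):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def pick_hidden_units(seed, candidates=(2, 4, 8, 16)):
    # Score each candidate hidden-layer size on a random validation
    # split and keep the best-scoring one.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    scores = {}
    for h in candidates:
        net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000,
                            random_state=seed).fit(X_tr, y_tr)
        scores[h] = net.score(X_val, y_val)  # validation accuracy
    return max(scores, key=scores.get)

for seed in (1, 2, 3):
    print(seed, pick_hidden_units(seed))  # the chosen size can differ by run
```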
The network structure changes because you are using a self-organizing neural network, known as a Kohonen Self-Organizing Map (SOM). This type of neural network omits neurons that are not effective and keeps the ones that are effective during learning. And because you initialize the network with random weights, you will always get a different structure depending on the random values. I think Fatima B. Ibrahim meant to say the same thing.
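For intuition, here is a minimal SOM training loop in plain NumPy (the grid size, learning rate, and neighborhood schedule are all made-up toy values). With an unseeded random start, the final map, and which units end up effectively unused, differs from run to run:

```python
import numpy as np

rng = np.random.default_rng()          # unseeded: different on every run
data = rng.random((200, 3))            # toy 3-D inputs
weights = rng.random((5, 5, 3))        # 5x5 grid of codebook vectors

for t in range(1000):
    x = data[rng.integers(len(data))]
    # Find the best-matching unit (BMU) for this sample.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    # Decaying learning rate and neighborhood radius.
    lr = 0.5 * (1 - t / 1000)
    sigma = 0.5 + 2.0 * (1 - t / 1000)
    ii, jj = np.meshgrid(range(5), range(5), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    # Pull the BMU and its neighbors towards the sample.
    weights += lr * h[..., None] * (x - weights)

# Units that never "win" are effectively dropped from the map;
# which ones those are depends on the random start.
wins = np.zeros((5, 5), dtype=int)
for x in data:
    d = np.linalg.norm(weights - x, axis=2)
    wins[np.unravel_index(d.argmin(), d.shape)] += 1
print((wins == 0).sum(), "of 25 units never won a sample")
```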
Thanks for your kind reply. Yes, I am using self-organizing neural networks. As both Fatima and you recommend an adaptive network, I will take this suggestion positively.
It sounds like IBM SPSS uses a selective pruning approach (based on synaptic pruning), whereby the weakest neurons are removed from the network. Hence the weight topology has to be readjusted.
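I don't know SPSS's internals, but a magnitude-based pruning pass might look roughly like this toy NumPy sketch (the shapes and the 'weakest half' threshold are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng()            # unseeded: differs per run
W1 = rng.normal(size=(8, 10))            # input  -> hidden (10 units)
W2 = rng.normal(size=(10, 1))            # hidden -> output

strength = np.abs(W2).ravel()            # one outgoing weight per hidden unit
keep = strength > np.median(strength)    # remove the weakest half
W1, W2 = W1[:, keep], W2[keep, :]
print(W1.shape, W2.shape)                # the hidden layer has shrunk
```

Because the initial weights are random, which neurons count as 'weakest' differs per run, so the pruned architecture differs too.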
The objective of training is to adjust the weights so that the network fits a given problem. Every time you re-train your network, the weights restart from different random values; a correction rule then adapts the weights to the task at hand. So, given the random start, each re-training will produce a different set of weights that nonetheless solves the same problem.
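You can check this for yourself with a quick sketch (scikit-learn here, simply because it is easy to script): two trainings of the same network on the same data, differing only in the random start, reach similar accuracy with different weights.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

# Same data, same architecture; only the random start differs.
nets = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                      random_state=s).fit(X, y) for s in (1, 2)]

print([round(n.score(X, y), 3) for n in nets])            # similar accuracy
print(np.allclose(nets[0].coefs_[0], nets[1].coefs_[0]))  # False: weights differ
```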