In principle yes, but only if you initialise the ANN exactly the same way each time, i.e. with exactly the same initial weights, and train it on exactly the same training set (and in the same presentation order, depending on which training algorithm you are using).
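To illustrate the point, here is a minimal sketch (a toy one-epoch SGD loop on a linear model, not any particular library's training routine) showing that when the initial weights and the data presentation order are both derived from the same seed, the final weights are bit-for-bit identical across runs:

```python
import numpy as np

def train(seed):
    """One epoch of SGD on a toy linear model, fully determined by the seed."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)              # initial weights
    X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    y = np.array([1.0, 2.0, 3.0])
    order = rng.permutation(len(X))     # presentation order of the examples
    for i in order:
        grad = 2.0 * (X[i] @ w - y[i]) * X[i]   # squared-error gradient
        w -= 0.01 * grad                        # SGD update
    return w

# Same seed => same initial weights and same data order => identical result.
assert np.array_equal(train(0), train(0))
```

The same reasoning carries over to a real network: any run-to-run difference must come from some source of randomness (initialisation, shuffling, dropout, non-deterministic hardware ops) that was not pinned down.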
I think your question needs clarification. If you want the same trained network to give the same answer to the same input, you can have that. But if you mean training the NN on a particular data set, getting an answer, and then repeating that whole scenario exactly, that is a different matter. So as stated, your question is a bit ambiguous.
In theory the answer to this question is yes, but all initial conditions and the training algorithm must be identical. Finally, the training data must be presented to the NN in the same order.
I agree with Davide Roverso. However, there is something to add. In principle, a neural network may initially be given hereditary information that fixes a unique implementation of a limited set of functions. On the basis of this information, the network can be trained and retrained within prescribed limits, for example by changing the synaptic weights only within a small range around their baseline values. As a result, the scope for obtaining the results referred to in the question expands.
If what you are seeking is reproducibility of the results, you should fix the random number generator's seed (assuming all the rest of the code/parameters remain the same).
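As a concrete sketch of seed-fixing (using NumPy's generator API purely as an example; the same idea applies to any framework's seeding function), two generators created with the same seed produce identical "random" initial weights:

```python
import numpy as np

# Two independent generators with the same seed behave identically,
# so every "random" draw is reproducible across runs.
rng_run1 = np.random.default_rng(42)
rng_run2 = np.random.default_rng(42)

w_run1 = rng_run1.normal(size=(3, 3))  # e.g. an initial weight matrix, run 1
w_run2 = rng_run2.normal(size=(3, 3))  # same seed -> identical weights, run 2

assert np.array_equal(w_run1, w_run2)
```

In practice you would call your framework's own seeding routine once at the start of the script, before any weights are created or any data is shuffled.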
If you initialize the weights randomly in each training run, you won't get the same output or the same final weight values. So fix the initial weight values across training runs. More generally, if any parameter in your program is chosen randomly, you won't get the same result.