We have a new algorithm for object recognition with a neural network. If there is an error, how can we tell whether it is in the code or in the algorithm itself? We want to test on 100,000 images, and a single run may take many days.
Well, with neural networks it's always difficult. But you do want to run tests on smaller datasets first, and perhaps generate some synthetic control data where you know what the output should look like, depending on your model.
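As a sketch of the "synthetic control data" idea: build a dataset whose correct answer is known by construction, then check that the pipeline recovers it. The classifier below (nearest centroid) and the cluster centers are hypothetical stand-ins; the point is that if a model fails on a trivially separable control set, the bug is almost certainly in the code, not the algorithm.

```python
import random

random.seed(0)

# Two well-separated Gaussian clusters: class 0 around (0, 0),
# class 1 around (5, 5) -- trivially separable by construction.
def make_point(cx, cy):
    return (cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5))

data = [(make_point(0, 0), 0) for _ in range(100)] + \
       [(make_point(5, 5), 1) for _ in range(100)]

# Stand-in classifier: nearest centroid. Substitute your own model here.
def predict(p):
    d0 = p[0] ** 2 + p[1] ** 2
    d1 = (p[0] - 5) ** 2 + (p[1] - 5) ** 2
    return 0 if d0 < d1 else 1

accuracy = sum(predict(p) == y for p, y in data) / len(data)
print(accuracy)  # anything far below 1.0 here signals a code bug
```

A run like this takes seconds, so it can go in a test suite and catch regressions long before the multi-day 100,000-image run.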
As for the source of the error: if you want high confidence, you should probably write lots of tests for the individual methods within your implementation; it is impossible to make that distinction just by evaluating on a large dataset. You might get results that look 'right' while some borderline cases are still mishandled and errors remain in the code.
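One common example of such a per-method test for neural-network code is gradient checking: compare an analytic gradient against a finite-difference estimate. The toy loss below is a hypothetical stand-in for whatever layer or loss you actually implement; a mismatch here points at a bug in the code rather than at the algorithm.

```python
def loss(w):
    # Toy quadratic loss; substitute the real function under test.
    return (w - 3.0) ** 2

def loss_grad(w):
    # Hand-derived analytic gradient of the loss above.
    return 2.0 * (w - 3.0)

def numeric_grad(f, w, eps=1e-6):
    # Central finite-difference approximation of df/dw.
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w = 1.25
analytic = loss_grad(w)
numeric = numeric_grad(loss, w)
print(abs(analytic - numeric))  # should be tiny if the gradient code is right
```

Checks like this isolate one method at a time, so when the full training run misbehaves you already know which pieces are trustworthy.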
I ran into similar problems when I began working on my own libraries, until I learned that it is much better to spend a lot of time testing than to discover something was wrong only after the research is done and you've already started writing a paper... at that point you'd have to throw everything away and start over. This actually happened to me a few years ago, and it taught me that it pays off to be much more careful.