For an artificial neural network, I have already tried many structures, and the best result came from setting the number of hidden-layer neurons equal to the number of input-layer neurons. Is there any special reason for that?
As Nacéra Benamrane already mentioned, there is no clear rule for this.
For example, in a basic classification problem the number could be lower if an 'easy' separation can be found during training. A higher number of hidden neurons can indicate that the problem first needs to be mapped into a higher-dimensional space where a solution can be found.
All of that depends heavily on the problem you're trying to solve and on your training data. Also, neural nets (especially more complex ones) are hard to analyse due to their number of parameters and their black-box nature.
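As an illustration of how problem- and data-dependent this is, here is a minimal sketch (assuming scikit-learn is available; the toy dataset, the particular widths, and the hyperparameters are arbitrary choices, not a recommendation) that compares a few hidden-layer widths on a simple two-dimensional classification problem:

```python
# Compare different hidden-layer widths on a toy 2-D dataset.
# The input layer has 2 features, so widths above 2 map the data
# into a higher-dimensional hidden space before classifying.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for width in (2, 4, 16, 64):  # hidden neurons per layer (arbitrary grid)
    clf = MLPClassifier(hidden_layer_sizes=(width,),
                        max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    scores[width] = clf.score(X_te, y_te)

for width, acc in scores.items():
    print(f"hidden={width:3d}  test accuracy={acc:.3f}")
```

On an easy dataset like this the accuracy tends to plateau quickly with width, which is exactly why the 'hidden size = input size' rule of thumb cannot be general: the useful width tracks the complexity of the decision boundary, not the input dimension.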