A high learning rate will make the learning of the neural network oscillate, particularly if you are using the back-propagation algorithm, and can push it into divergence. Hence, in general, a lower learning rate is recommended so that learning is steady and the error rate decreases gradually. A common setting is around 0.05.
In the standard back-propagation algorithm, too low a learning rate makes the network learn very slowly, while too high a learning rate makes the weights and the objective function diverge, so there is no learning at all.
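The divergence threshold is easy to see on a toy problem. The sketch below is a minimal illustration, not taken from the answers above: plain gradient descent on f(w) = w², whose update w ← w − lr·2w multiplies w by (1 − 2·lr), so the iterates converge when lr < 1 and diverge when lr > 1. The learning rates 0.05, 0.9, and 1.1 are arbitrary choices for illustration.

```python
# Minimal sketch: gradient descent on f(w) = w**2, whose gradient is 2*w.
# The update w <- w - lr * 2*w multiplies w by (1 - 2*lr), so the iterates
# shrink when lr < 1.0 and oscillate or diverge when lr > 1.0.
def gradient_descent(lr, steps=10, w=1.0):
    history = [w]
    for _ in range(steps):
        w = w - lr * 2 * w  # gradient of w**2 is 2*w
        history.append(w)
    return history

print(gradient_descent(lr=0.05))  # steady, monotone decrease toward 0
print(gradient_descent(lr=0.9))   # oscillates in sign but still converges
print(gradient_descent(lr=1.1))   # |1 - 2*lr| > 1: diverges
```

The same three regimes (steady decrease, oscillation, divergence) appear when training real networks, just without a closed-form threshold.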
Regarding the upper bound, I think one can use a trial-and-error process, varying the layers of the network.
It is already known that with a high learning rate the solution diverges, and the point at which this happens depends on the given activation function. If we modify the activation function, a higher learning rate can be tolerated; in some cases multi-level activation functions have been introduced to allow a high learning rate. So if we consider a new or hybrid activation function rather than the usual one, I think a high learning rate will be possible. In this case a simulation experiment is required.
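A minimal scaffold for such a simulation experiment might look like the sketch below: train the same one-hidden-layer regression network with two different activation functions at the same, deliberately high learning rate and compare how the loss behaves. The network size, the sin(x) target, and lr=0.5 are assumptions chosen purely for illustration, not values from this discussion.

```python
import numpy as np

def train(activation, activation_grad, lr=0.5, steps=2000, seed=0):
    # One-hidden-layer regression network trained by plain back-propagation.
    # Returns the final MSE, or inf if training diverged along the way.
    rng = np.random.default_rng(seed)
    x = np.linspace(-2, 2, 64).reshape(-1, 1)
    y = np.sin(x)                            # illustrative target function
    w1 = rng.normal(scale=0.5, size=(1, 16))
    b1 = np.zeros(16)
    w2 = rng.normal(scale=0.5, size=(16, 1))
    b2 = np.zeros(1)
    for _ in range(steps):
        pre = x @ w1 + b1                    # hidden pre-activation
        h = activation(pre)                  # hidden layer output
        pred = h @ w2 + b2                   # linear output layer
        err = pred - y
        loss = np.mean(err ** 2)
        if not np.isfinite(loss):            # weights blew up: diverged
            return np.inf
        # back-propagate through the MSE and the hidden layer
        g_pred = 2 * err / len(x)
        g_w2 = h.T @ g_pred
        g_h = (g_pred @ w2.T) * activation_grad(pre)
        g_w1 = x.T @ g_h
        w2 -= lr * g_w2
        b2 -= lr * g_pred.sum(axis=0)
        w1 -= lr * g_w1
        b1 -= lr * g_h.sum(axis=0)
    return loss

sigmoid = lambda z: 1 / (1 + np.exp(-z))
print("tanh   :", train(np.tanh, lambda z: 1 - np.tanh(z) ** 2))
print("sigmoid:", train(sigmoid, lambda z: sigmoid(z) * (1 - sigmoid(z))))
```

Sweeping the learning rate in such a script and recording where each activation first diverges would give an empirical per-activation upper bound, which is the kind of evidence this answer suggests collecting.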