Random projection can preserve pairwise distances (with high probability) when data are projected into a random projection space. Can we expect this transformation to improve classification accuracy? When, and how?
Random projection is really no different from other linear dimensionality reduction techniques except, of course, it is random. That means that strict guarantees have to be replaced by probabilistic guarantees.
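To make the probabilistic guarantee concrete, here is a small sketch (not from the answer; sizes and the Gaussian projection matrix are illustrative) that empirically checks how well pairwise distances survive a random projection:

```python
# Empirical check of distance preservation under a Gaussian random projection.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10_000, 1_000          # points, original dim, projected dim

X = rng.standard_normal((n, d))
R = rng.standard_normal((d, k)) / np.sqrt(k)   # random projection matrix
Y = X @ R

def pairwise_sq_dists(A):
    # squared Euclidean distances between all pairs of rows
    sq = (A ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

orig = pairwise_sq_dists(X)
proj = pairwise_sq_dists(Y)
mask = ~np.eye(n, dtype=bool)        # ignore the zero diagonal
ratios = proj[mask] / orig[mask]

# With k this large, every ratio is typically close to 1 -- distances are
# preserved in probability, not exactly.
print(ratios.min(), ratios.max())
```

The guarantee degrades as k shrinks: the spread of the ratios around 1 grows roughly like sqrt(1/k), which is exactly the "probabilistic, not strict" trade-off described above.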
The mechanics of when RP will help are essentially identical to the cases where other linear reductions would help: any time you are using a learning method that deals poorly with high dimensionality, RP is a good candidate.
It will, however, make your sparse problems dense. That can make things much worse if the pattern of sparsity is an important signal for your learning algorithm.
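A quick sketch of that densification effect (sizes and the density value are illustrative): a very sparse matrix comes out of the projection essentially fully dense.

```python
# A sparse matrix becomes dense after a random projection.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, d, k = 200, 5_000, 100

X = sparse.random(n, d, density=0.001, format="csr", random_state=0)
R = rng.standard_normal((d, k)) / np.sqrt(k)

Y = X @ R                        # result is a dense ndarray

nnz_before = X.nnz / (n * d)     # fraction of nonzeros in the original
nnz_after = float(np.mean(Y != 0))  # almost every entry is now nonzero
print(f"nonzero fraction: {nnz_before:.4f} -> {nnz_after:.4f}")
```

Beyond memory cost, note that the zero/nonzero pattern itself is gone after projection, so any algorithm that read meaning into which features were present loses that signal.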
There are interesting connections between the effectiveness of learning after projection, the sample complexity of learning, and rotational invariance. A random projection makes the representation rotationally invariant, which means axis-aligned tricks like L1 regularization no longer apply. The dimensionality can be reduced to roughly k log d (for a k-sparse target), but a rotationally invariant learner has sample complexity linear in the number of dimensions it sees, which is therefore logarithmic in the original number of dimensions. In the original space, an L1-regularized learner already has sample complexity logarithmic in the number of dimensions, so the two routes end up with about the same effect.
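A toy sketch of the rotational-invariance point (all of this is illustrative: synthetic data, a sparse true weight vector, and sklearn's solvers standing in for "an L1 learner" and "a rotationally invariant learner"). An L1-regularized model can exploit axis-aligned sparsity in the original space; after a random projection that sparsity is gone, and a plain L2-style learner on the projected features is what remains.

```python
# L1 in the original sparse-target space vs. a plain learner after RP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 400, 500, 50

X = rng.standard_normal((n, d))
w = np.zeros(d)
w[:5] = 3.0                                   # sparse true weight vector
y = (X @ w + 0.5 * rng.standard_normal(n) > 0).astype(int)

R = rng.standard_normal((d, k)) / np.sqrt(k)
Xp = X @ R                                    # projected (dense) features

# L1-regularized learner on the original axis-aligned features
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
l1.fit(X[:200], y[:200])

# Default (L2-regularized, rotation-friendly) learner on projected features
l2 = LogisticRegression()
l2.fit(Xp[:200], y[:200])

acc_orig = l1.score(X[200:], y[200:])
acc_proj = l2.score(Xp[200:], y[200:])
print(acc_orig, acc_proj)
```

With these particular sizes (k much smaller than d, target supported on only 5 axes), the L1 learner in the original space tends to come out ahead, because the projection discards most of the energy of the sparse weight vector; with k grown toward k log d for the sparsity level, the gap narrows, which is the "about the same effect" point above.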
If you haven't seen the blog http://nuit-blanche.blogspot.com/ , take a look. Lots of interesting topics related to random projections (RP).
I have limited knowledge of RP. That said, I doubt dimension reduction makes classification MORE accurate (it is, after all, a lower-dimensional approximation). But in general it is MUCH easier to work with lower-dimensional data.