As Michael said, along with RMSE, the Mean Absolute Percentage Error (MAPE) and Mean Absolute Error (MAE) can also be used to check the performance of different methods.
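In case a concrete reference helps, here is a minimal NumPy sketch of those three metrics. The toy y_true / y_pred arrays are just made-up illustrations, and note that MAPE is undefined wherever the true value is zero:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (undefined where y_true == 0)."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Hypothetical observed vs. predicted values, just for illustration.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.9, 6.5])

print(f"RMSE: {rmse(y_true, y_pred):.3f}")
print(f"MAE:  {mae(y_true, y_pred):.3f}")
print(f"MAPE: {mape(y_true, y_pred):.1f}%")
```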
I think we're missing two very important parts of this question.

First, if we're evaluating the accuracy of an algorithm, we need to be sure that the assumptions made about how the expected output should be modelled are satisfied. Concretely, even if accuracy is high (error is low), that doesn't mean we have modelled the problem well if we assume the solution follows a unimodal distribution when in reality it has multiple modes. A classic example of this is the inverse kinematics problem -- the arithmetic mean of several correct output modes is itself not guaranteed to be a good solution. So if you want to measure error, do a sanity check that your model is capable of representing the expected distribution of output values.

Second, accuracy is not the only thing we care about in a model; we also care that the model is only as complex as it needs to be (i.e. as simple as possible). For this, we can measure how much information each parameter in the model contributes -- for instance, using the Akaike Information Criterion.
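If you want to see the AIC idea in code rather than through a package, here is a rough sketch. It assumes a least-squares fit with i.i.d. Gaussian residuals (which gives the common form n·ln(RSS/n) + 2k, dropping additive constants); the polynomial toy data and the NumPy fitting calls are my own illustration, not something from the answer above:

```python
import numpy as np

def aic_gaussian(y_true, y_pred, n_params):
    """AIC for a least-squares fit, assuming i.i.d. Gaussian residuals:
    AIC = n * ln(RSS / n) + 2k (additive constants dropped).
    Lower is better; the 2k term penalizes extra parameters."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    rss = np.sum((y_true - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Toy comparison: fit a straight line and a degree-5 polynomial
# to noisy linear data, then compare their AIC values.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    # A polynomial of degree d has d + 1 fitted coefficients.
    print(f"degree {degree}: AIC = {aic_gaussian(y, pred, n_params=degree + 1):.2f}")
```

Whichever model prints the lower AIC is the one the criterion prefers; for data that really is linear plus noise, the extra parameters of the degree-5 fit typically don't buy enough likelihood to pay for the 2k penalty.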
Now if your assumptions are correct, and you only care about error on its own, I've noticed that no one has mentioned evaluating classification performance (the other answers only deal in real-valued outputs) -- see the Matthews correlation coefficient (a.k.a. Pearson's phi coefficient) for a nice single-number summary of how well a classifier is doing.
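For what it's worth, scikit-learn ships an implementation of this coefficient, so a quick check might look like the sketch below (the labels are purely hypothetical):

```python
from sklearn.metrics import matthews_corrcoef

# Hypothetical binary labels and predictions. MCC ranges from -1
# (total disagreement) through 0 (no better than chance) to +1
# (perfect prediction), and remains informative on imbalanced classes.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(f"MCC: {matthews_corrcoef(y_true, y_pred):.3f}")
```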
Thanks for your comprehensive explanation of how to evaluate two different techniques. I hadn't heard of some of the techniques you mentioned before this, for instance the Akaike Information Criterion and the Matthews correlation coefficient.
However, the good thing is that I have found a use for the Akaike Information Criterion when comparing two models in the OriginLab software that I use now.