The answer depends on what criterion or criteria you're trying to maximize or minimize via the model-building process. Is it squared error? Classification accuracy? Model simplicity? AIC or BIC? Stability of parameter estimates over repeated samples? Computational time needed to arrive at a model? Something else?
You'll need to decide on what evidence would persuade you that a model was functioning suitably, and use that/those indicators to make your judgment. Chances are, you'd need to run many iterations of the competing methods on new, simulated, or resampled subsets of your data in order to have any confidence in the outcomes.
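A minimal sketch of that resampling idea, assuming scikit-learn and two illustrative competing models on synthetic data: fit both on repeated cross-validation splits of the same data and compare their per-fold RMSE distributions (here with a simple paired t-test on fold scores) rather than a single point estimate.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Synthetic stand-in for your own dataset.
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)

# cross_val_score returns *negated* RMSE, so flip the sign.
rmse_lin = -cross_val_score(LinearRegression(), X, y,
                            scoring="neg_root_mean_squared_error", cv=cv)
rmse_rf = -cross_val_score(RandomForestRegressor(random_state=0), X, y,
                           scoring="neg_root_mean_squared_error", cv=cv)

print(f"linear RMSE: {rmse_lin.mean():.2f} +/- {rmse_lin.std():.2f}")
print(f"forest RMSE: {rmse_rf.mean():.2f} +/- {rmse_rf.std():.2f}")

# Paired t-test on fold-wise scores. This is only an approximation,
# since folds overlap across repeats; a variance-corrected test is
# preferable for publication.
t, p = stats.ttest_rel(rmse_lin, rmse_rf)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```

The point is that the comparison becomes a distribution over resamples, which gives you some sense of whether an observed difference between methods is stable or noise.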
Here are two examples of how research teams conducted such comparisons:
Conference paper: Comparison of data mining techniques and tools for data classification
Article: Comparing data mining methods on the VAERS database
I just need to compare performance. I have regression models that predict a number of years; my results include RMSE and R², and I need to determine which model predicts better. The journal's reviewers rejected comparisons based only on RMSE or R² and want a different way of comparing. I have read a lot of articles, but none offers a different way.
Regression problems dealing with class or categorical outcomes may need model metrics like sensitivity and specificity. A confusion matrix gives you the model's precision metrics; a good example is logistic regression used to predict a variable like groundwater salinity or a phenomenon like landslide susceptibility from an array of predictors in a study.
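A small illustration of that point, assuming scikit-learn and a synthetic binary dataset standing in for something like "saline" vs "non-saline" groundwater: sensitivity and specificity fall straight out of the confusion matrix of a logistic regression classifier.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data as a placeholder for real predictors.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()

sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```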
For conventional or Bayesian regression problems that predict a continuous variable (i.e. linear regression problems), model metrics like precision and the correlation coefficient between predicted and observed values may be useful in determining whether your model is reliable.
One may also make similar predictions using a simpler algorithm like k-nearest neighbours, and compare its summary outputs with those generated by the conventional (or even Bayesian) regression.
Overall, the kind of data one has and its size will greatly influence the regression outcomes, whether for multiple or simple linear regression predictions.
The other key factor in classification metrics is dataset size and class imbalance: the training and testing subsets strongly determine the model's metrics and accuracy. When one class of data dominates the other(s), the model may need fine-tuning before the results are interpreted.
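To illustrate the imbalance point with a sketch, assuming scikit-learn and a deliberately skewed synthetic dataset: raw accuracy can look fine even when the minority class is largely missed, and re-weighting the classes (here via `class_weight="balanced"`) is one common fine-tuning step.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Roughly 95% / 5% class split to mimic a dominating class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

scores = {}
for name, clf in [
    ("plain", LogisticRegression(max_iter=1000)),
    ("balanced", LogisticRegression(max_iter=1000, class_weight="balanced")),
]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    scores[name] = (accuracy_score(y_te, pred),
                    recall_score(y_te, pred))  # recall on the minority class
    print(f"{name}: accuracy = {scores[name][0]:.2f}, "
          f"minority recall = {scores[name][1]:.2f}")
```

Comparing minority-class recall rather than overall accuracy is what reveals whether the fine-tuning actually helped.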