If you are looking at a multiple linear regression, you should use the adjusted R^2 value. If it is a simple linear regression, then plain R^2 is fine.
R^2(adj) penalizes you for including non-significant terms in your model.
As you increase the number of factors in your analysis, the R^2 value will always go up. You can even add columns of random data to your regression, and R^2 will still go up. But if you add random data, R^2(adj) will tend to decrease, because the penalty for the extra terms outweighs the tiny amount of noise they happen to fit.
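You can see this for yourself with a quick simulation. This is just a sketch (the data, seed, and helper function are all made up for illustration): fit an ordinary least-squares model with one real predictor, then pad it with 20 columns of pure noise and watch the two statistics diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)  # one real predictor plus noise

def r2_and_adj(X, y):
    """Fit OLS with an intercept; return R^2 and adjusted R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    p = X.shape[1]  # number of predictors, excluding the intercept
    # R^2(adj) = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
    adj = 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - p - 1)
    return r2, adj

r2_1, adj_1 = r2_and_adj(x.reshape(-1, 1), y)

# Pad the model with 20 purely random predictors
X_junk = np.column_stack([x.reshape(-1, 1), rng.normal(size=(n, 20))])
r2_2, adj_2 = r2_and_adj(X_junk, y)

print(f"1 predictor:      R^2={r2_1:.3f}  R^2(adj)={adj_1:.3f}")
print(f"+20 random cols:  R^2={r2_2:.3f}  R^2(adj)={adj_2:.3f}")
```

Plain R^2 can only rise when you add columns; the gap between it and R^2(adj) widens as the junk terms pile up.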
I would be more interested in seeing how the R^2 adj and the R^2 pred compare to each other.
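R^2(pred) comes from the PRESS statistic (the sum of squared leave-one-out prediction errors), which for OLS you can get without refitting by scaling each residual by 1/(1 - h_ii) from the hat matrix. Here is a minimal sketch of that comparison, with made-up data chosen so the model is overfit (many predictors, few observations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
X = rng.normal(size=(n, 12))          # many predictors relative to n
y = X[:, 0] + rng.normal(size=n)      # only the first predictor matters

A = np.column_stack([np.ones(n), X])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - (resid @ resid) / ss_tot
p = X.shape[1]
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# PRESS: leave-one-out residuals are e_i / (1 - h_ii), h_ii from the hat matrix
H = A @ np.linalg.inv(A.T @ A) @ A.T
loo = resid / (1.0 - np.diag(H))
r2_pred = 1.0 - (loo @ loo) / ss_tot

print(f"R^2={r2:.3f}  R^2(adj)={r2_adj:.3f}  R^2(pred)={r2_pred:.3f}")
```

When the model is overfit, R^2(pred) drops well below the other two, which is exactly the warning sign you want it to give.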
I have had a negative R^2(adj) a few times. It tends to occur when I have lots of terms in the model that are essentially random. Since I doubt you are analyzing randomly generated data, I would take it to mean you have a lot of variables in your model that don't belong there.
Right now I have an issue where R^2(adj) is over 60% but R^2(pred) is near 0.00%. My data was collected at different times, and during one of the data-gathering sessions the data recorders were not working right, so that portion of the data is much noisier than what I usually get. When I analyzed that portion by itself, I got R^2 near 40% and R^2(adj) near zero.