My first model has 2 predictor variables, and I want to know whether the predictive power of the model increases with the addition of a 3rd predictor variable.
You can always look at the R^2 and CV-RMSE of the model with 2 variables versus the model with 3 variables. If R^2 gets bigger and CV-RMSE gets smaller, that would indicate the larger model is better. (Keep in mind that in-sample R^2 can never decrease when you add a predictor, so the cross-validated RMSE, or adjusted R^2, is the more telling comparison.) It doesn't answer the question of how much better; that would depend on the uncertainty of your variables, the size of the coefficients, the standard errors, etc.
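For instance, here is a minimal sketch in Python/scikit-learn of that comparison, assuming a pandas DataFrame with hypothetical predictor columns "x1", "x2", "x3", a response column "y", and a hypothetical file name:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def cv_rmse(X, y, folds=5):
    # Cross-validated RMSE: lower is better.
    scores = cross_val_score(LinearRegression(), X, y,
                             scoring="neg_root_mean_squared_error", cv=folds)
    return -scores.mean()

def fit_r2(X, y):
    # In-sample R^2: higher is better, but never decreases when predictors are added.
    return LinearRegression().fit(X, y).score(X, y)

df = pd.read_csv("data.csv")          # hypothetical data file
y = df["y"]
X2 = df[["x1", "x2"]]                 # 2-predictor model
X3 = df[["x1", "x2", "x3"]]           # 3-predictor model

print("R^2:     2 vars =", fit_r2(X2, y), " 3 vars =", fit_r2(X3, y))
print("CV-RMSE: 2 vars =", cv_rmse(X2, y), " 3 vars =", cv_rmse(X3, y))
```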
You might try a graphical approach. In fact, I wrote the following methods paper specifically because I have seen so many people on ResearchGate ask similar questions about comparing alternative models. Graphics can be very informative. Figures 5 and 6 could each be the result of comparing various pairs of models, not just the models I used when I constructed them:
As a statistician, I see the value in statistics, but often graphics are more useful and less subject to misinterpretation. (I have a question somewhat related to that here: https://www.researchgate.net/post/How_often_does_a_statistical_test_have_much_value)
There have been a number of questions similar to yours where, among other things, I have suggested that the questioner research "model validation" and/or "model selection." But I really think graphics are far underrated, and more rigorous statistics that are harder to interpret are sometimes of less practical value.
Something further for you to note: if you leave out a necessary regressor (i.e., independent variable/predictor), you may have "omitted variable bias." If you use more regressors than you need, you could inflate variance substantially due to collinearity. There are ways to deal with the latter problem (ridge regression, at the expense of bias; principal components, at the expense of interpretability; etc.), but I think it is often best just to use the regressors that subject-matter theory, and perhaps residual graphics, both seem to support. (There could be various ways the regressors interact, but if you are only looking at "predictive power," that simplifies the question.)
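A minimal sketch of checking for collinearity (via variance inflation factors) and of ridge regression as one possible remedy, again assuming the hypothetical DataFrame and column names from the earlier example:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.linear_model import RidgeCV

df = pd.read_csv("data.csv")                  # hypothetical data file
X = df[["x1", "x2", "x3"]]
y = df["y"]

# VIF > ~10 is a common rough warning sign of collinearity.
Xc = sm.add_constant(X)
vif = {name: variance_inflation_factor(Xc.values, i)
       for i, name in enumerate(Xc.columns) if name != "const"}
print("VIFs:", vif)

# Ridge regression with the penalty chosen by cross-validation
# (reduces variance at the expense of some bias).
ridge = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]).fit(X, y)
print("chosen alpha:", ridge.alpha_, "coefficients:", ridge.coef_)
```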
If you look at the estimated variances of the prediction errors (for example, the square root of that quantity in SAS PROC REG is STDI), that may often come close to a good and interpretable measure of uncertainty, since they are designed to estimate variance; but because of the way sigma is estimated, bias also has an impact on that measure.
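If you are not working in SAS, a rough equivalent can be computed by hand: the standard error of an individual prediction combines the uncertainty of the fitted mean with the residual variance. A minimal sketch with statsmodels, under the same hypothetical data assumptions as above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("data.csv")                  # hypothetical data file
X = sm.add_constant(df[["x1", "x2", "x3"]])
y = df["y"]

fit = sm.OLS(y, X).fit()
pred = fit.get_prediction(X)

# Standard error of an individual prediction (analogous to STDI in SAS PROC REG):
# sqrt(se_mean^2 + sigma^2), where sigma^2 is the residual mean square.
stdi = np.sqrt(pred.se_mean**2 + fit.mse_resid)
print(stdi[:5])
```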
Not knowing the exact nature of your study problem and data, I will simply note that you have to decide which of the above applies.
It is to be expected that adding an extra independent variable to a model will improve its apparent (in-sample) fit. As Fabrice suggested, applying an information criterion such as the Akaike Information Criterion (AIC), the Bayesian or Schwarz Information Criterion (BIC/SIC), the Hannan-Quinn criterion (HQ), or Akaike's Final Prediction Error (FPE) introduces an over-parametrization penalty that favors parametric parsimony.
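A minimal sketch of such a comparison with statsmodels, again assuming the hypothetical DataFrame and column names used earlier (lower AIC/BIC is preferred, so the 3rd predictor has to earn its keep against the penalty for the extra parameter):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("data.csv")                  # hypothetical data file
y = df["y"]

m2 = sm.OLS(y, sm.add_constant(df[["x1", "x2"]])).fit()
m3 = sm.OLS(y, sm.add_constant(df[["x1", "x2", "x3"]])).fit()

print("AIC: 2 vars =", m2.aic, " 3 vars =", m3.aic)
print("BIC: 2 vars =", m2.bic, " 3 vars =", m3.bic)
```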
More is not always better. (Note "Such models may generalize poorly," stated in the abstract of "Sensitivity and Specificity of Information Criteria," June 27, 2012, Dziak, Coffman, Lanza, and Li, The Pennsylvania State University, https://methodology.psu.edu/media/techreports/12-119.pdf.)
You might want to research "bias-variance tradeoff."