I carried out an analysis to determine the superiority of the predictive potential of two methods (A and B) for cardio-metabolic risk. Method 'A' gave a coefficient of multiple regression of 0.427 and a p-value
Can you give more details about the regressions that you have run? Beyond people saying "don't use the p-values", people will need more information to give useful responses.
Daniel is correct; more information is needed. P-values are used to test the null hypothesis (no meaningful regression). Here you reject it and accept the alternative (there is a meaningful regression) for both methods (at both the 5% and 1% levels), so I assume that is why other advisers say to ignore the p-value when deciding which method to go for.
Prior to the multiple regression (MR), did you undertake Cronbach's alpha (or some other reliability check) for both?
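In case it helps, here is a minimal Python sketch of Cronbach's alpha; the `items` array (rows = respondents, columns = scale items) is hypothetical placeholder data, not Victor's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a 2-D array (rows = respondents, cols = items)."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents scoring a 3-item scale
items = np.array([[4, 5, 4],
                  [3, 3, 4],
                  [5, 5, 5],
                  [2, 3, 2],
                  [4, 4, 5]])
print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")
```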
For MR, the explained variation should be reported as adjusted R-squared rather than R-squared. We also need to know the number of coefficients in each model, the values and signs of the constants and coefficients, the VIF values for the coefficients, and tests of homoscedasticity and normality for each model. Knowing these parameters/values will allow for a more informed decision; see the sketch of these checks below.
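A minimal sketch of these diagnostics in Python, using statsmodels and scipy; the data here are simulated placeholders standing in for your own predictors and risk scores.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy.stats import shapiro

# Hypothetical data: X holds one method's predictors, y the risk score
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ [0.5, 0.3, 0.2] + rng.normal(size=100)

X_const = sm.add_constant(X)
model = sm.OLS(y, X_const).fit()
print("Adjusted R-squared:", round(model.rsquared_adj, 3))

# VIF for each predictor (values above roughly 5-10 suggest multicollinearity)
for i in range(1, X_const.shape[1]):
    print(f"VIF x{i}: {variance_inflation_factor(X_const, i):.2f}")

# Breusch-Pagan test of homoscedasticity (small p => heteroscedastic residuals)
bp_stat, bp_p, _, _ = het_breuschpagan(model.resid, X_const)
print(f"Breusch-Pagan p-value: {bp_p:.3f}")

# Shapiro-Wilk test of normality of the residuals
sw_stat, sw_p = shapiro(model.resid)
print(f"Shapiro-Wilk p-value: {sw_p:.3f}")
```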
Adding to what Daniel and Robert have suggested, Lin's concordance correlation coefficient (CCC) could be used to assess whether (and to what extent) the two methods provide similar predictions.
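Lin's CCC is simple to compute directly; below is a minimal Python sketch, where `pred_a` and `pred_b` are hypothetical predicted risks from the two methods for the same subjects.

```python
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two prediction sets."""
    sxy = np.cov(x, y, ddof=1)[0, 1]  # covariance of the two prediction sets
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Hypothetical predicted risks from methods A and B
pred_a = np.array([0.31, 0.45, 0.52, 0.28, 0.66])
pred_b = np.array([0.33, 0.43, 0.55, 0.30, 0.61])
print(f"Lin's CCC: {lins_ccc(pred_a, pred_b):.3f}")
```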
The p-value and the coefficient are two different things: the p-value indicates the significance of the findings, while the value of the coefficient is used as a measure of effect size (in your case, a medium effect size!). Reporting just the p-value is not enough; the effect size goes beyond the p-value and suggests the magnitude of the change produced by the experiment. Search Google for articles on why the p-value alone is not enough!
You could consider comparing your three models (risk ~ method A, risk ~ method B, risk ~ method A + B) through AIC (or AICc, depending on your sample size). First you could simply compare the AICs for all three models and calculate your deltas. Typically we say that a model fits the data on par with the top model (delta of 0) if it is within something like 2 delta units. But you could also statistically test the full model (risk ~ method A + method B) against each reduced model (A or B) through a likelihood ratio test. Here the test statistic (LRT) and the p-value will inform you as to whether the full model explains significantly more variation than the reduced one. Together these may help you select the best model beyond relying simply on comparing the model coefficients. I should also add that in the case where all three models perform well, you can use model averaging, based on the AIC weights, to give a weighted-average estimate of the model coefficients. This may be the best option (depending on your deltas in the AIC table) to infer the overall effect sizes, which is typically what we care about most.
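A sketch of this workflow in Python with statsmodels, assuming ordinary least-squares models; `method_a`, `method_b`, and `risk` are simulated placeholders for the real scores.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical data: method A and B scores predicting a risk outcome
rng = np.random.default_rng(1)
method_a = rng.normal(size=200)
method_b = 0.7 * method_a + rng.normal(size=200)   # correlated predictors
risk = 0.4 * method_a + 0.2 * method_b + rng.normal(size=200)

fit = lambda X: sm.OLS(risk, sm.add_constant(X)).fit()
m_a = fit(method_a)
m_b = fit(method_b)
m_ab = fit(np.column_stack([method_a, method_b]))

# Compare AICs; deltas within ~2 of the minimum are "on par" with the top model
aics = {"A": m_a.aic, "B": m_b.aic, "A+B": m_ab.aic}
best = min(aics.values())
for name, aic in aics.items():
    print(f"{name}: AIC = {aic:.1f}, delta = {aic - best:.1f}")

# Likelihood ratio test: full model (A+B) against each reduced model
for name, reduced in [("A", m_a), ("B", m_b)]:
    lrt = 2 * (m_ab.llf - reduced.llf)
    p = chi2.sf(lrt, m_ab.df_model - reduced.df_model)
    print(f"full vs {name}: LRT = {lrt:.2f}, p = {p:.4f}")
```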
Victor, they look similar to me. The p-values are of little use here, since they just tell you that you can reject the null hypothesis for each of your tests (i.e., that each is better than randomness), not which is better. Also, look at the 95% confidence intervals of your coefficients. If they overlap, there is most likely no difference, or your sample size is too small to detect even a small existing difference. In any case, the difference between your coefficients is hardly visible. Of course, this is a hasty conclusion, based only on your results, without going through the nuts and bolts of each element of your study. Best wishes, NM
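For the confidence-interval check, a minimal statsmodels sketch (simulated data in place of the real method scores and risk outcome):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical scores from the two methods and the observed risk
rng = np.random.default_rng(2)
method_a = rng.normal(size=150)
method_b = 0.8 * method_a + rng.normal(size=150)
risk = 0.4 * method_a + rng.normal(size=150)

# Fit one model per method and compare the slope confidence intervals
for name, x in [("A", method_a), ("B", method_b)]:
    res = sm.OLS(risk, sm.add_constant(x)).fit()
    lo, hi = res.conf_int(alpha=0.05)[1]  # 95% CI of the slope coefficient
    print(f"Method {name}: slope 95% CI = ({lo:.3f}, {hi:.3f})")
```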
Hi Victor, I totally agree with Seán R. Millar. The regression coefficient and p-value indicate relationships and strengths of association. ROC analysis or something similar would be more relevant to your research question.
Victor, I'd just like to support what Seán wrote. The p-values are more or less worthless in this context: they just tell you about the (negligible) probability that the results of your multiple regression analysis occurred by chance. The effect sizes tell you to what extent you can explain the variation of your outcome (18.2% and 17.2%, respectively). However, a ROC analysis will give you likelihood ratios that tell you more intuitively about the strength of your models. If I had to decide which model to use, I would rely on these (see e.g. http://www.medcalc.org/manual/roc-curves.php). Besides this, I fully agree with Daniel on using the AIC as a quality criterion for deciding which (less informative) parameters to remove (stepwise backward) from your model (Occam's razor). Good luck!
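A minimal sketch of the ROC comparison in Python with scikit-learn; the binary outcomes and predicted risks below are simulated placeholders, not real patient data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical binary outcomes and each model's predicted risk scores
rng = np.random.default_rng(3)
outcome = rng.integers(0, 2, size=200)          # 1 = event, 0 = no event
pred_a = 0.6 * outcome + rng.uniform(size=200)  # crude signal + noise
pred_b = 0.5 * outcome + rng.uniform(size=200)

# Higher AUC = better discrimination between events and non-events
print("AUC, method A:", round(roc_auc_score(outcome, pred_a), 3))
print("AUC, method B:", round(roc_auc_score(outcome, pred_b), 3))
```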
As stated in the first answer, no useful statement can be made without more information on the type (scale of measurement) of the dependent (cardio risk) and independent (predictive method) variables: are they nominal (categorical), ordinal (rank), continuous and normally distributed, or continuous and asymmetrical? Linear regression cannot properly handle every kind of data, and its statistics (R, R², and p-value) may be worthless.
Victor, as far as I know, the two values (magnitude and probability of significance) have different uses, and it would be better to look into the contribution or role of each of them, and the combination of their contributions.
Victor, your two methods (A and B) gave similar results: both coefficients are around 0.42, which indicates a moderate predictive value, and their p-values are identical. You probably used only a slightly different model in Model B compared to Model A, and you got very similar results. They are so similar that I would say that one is not really better than the other. This is common in building regression models, and you may prefer one model over the other simply because it seems to make more sense intuitively, or you may present the results of both models.
I do not disagree with the others' comments about using ROC analysis. But for ROC you need an actual diagnosis, or a "gold standard" test, to compare your prediction against. I assume you have the actual cardiovascular outcomes for these patients, so yes, you could use ROC analysis. You could also use logistic regression or survival analysis (Cox proportional hazards regression) to determine the impact of each risk factor. This would give odds ratios (or hazard ratios) for each risk factor based on the entire group of patients. You can then turn that around and do predictive modelling based on the risk profile of each individual patient, using logistic regression, come up with a risk of the outcome for each individual patient, and put that into a ROC analysis, or just compute sensitivity and specificity based on these predictions. It depends on what you want to do: describe the magnitude and precision of the risk factors; compute the risk to an individual patient based on the risk factors they have; or describe the accuracy and predictive power of your model.
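As a sketch of this predictive-modelling route (logistic regression → per-patient risk → ROC / sensitivity and specificity) in Python; the risk factors and outcomes are simulated placeholders, and the 50% cut-off is just an illustrative choice.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical risk factors and observed cardiovascular outcomes
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))  # e.g. two risk factors per patient
true_p = 1 / (1 + np.exp(-(-1.0 + 0.8 * X[:, 0] + 0.4 * X[:, 1])))
outcome = rng.binomial(1, true_p)

# Logistic regression: odds ratios for each risk factor
logit = sm.Logit(outcome, sm.add_constant(X)).fit(disp=0)
print("Odds ratios:", np.round(np.exp(logit.params), 2))

# Predicted risk for each individual patient, fed into a ROC analysis
risk = logit.predict(sm.add_constant(X))
print("AUC of predicted risks:", round(roc_auc_score(outcome, risk), 3))

# Sensitivity and specificity at a 50% predicted-risk cut-off
pred_pos = risk >= 0.5
sens = (pred_pos & (outcome == 1)).sum() / (outcome == 1).sum()
spec = (~pred_pos & (outcome == 0)).sum() / (outcome == 0).sum()
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
```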