I'd like to compare more than two models (in my case there are six, all binomial with two continuous predictors each) in order to assess which one provides the best fit. I suppose I should use the Deviance Information Criterion (DIC), and I am using the rjags package. However, it seems that the comparison function only handles two models at a time -- you compute the difference between two models and pick the one with the better fit. I thought I could perform a pairwise comparison (with only six models it would not be that time-consuming), but I'm not sure it really makes sense. Alternatively, I could simply rank all models by their DIC values and take the model with the lowest DIC as preferred -- does that make sense? A rough sketch of what I have in mind is below.
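For concreteness, here is a minimal sketch of the ranking idea (assuming the six compiled jags.model objects are collected in a list called models; the list name and n.iter value are just placeholders):

```r
library(rjags)

## models <- list(m1, m2, m3, m4, m5, m6)  # six compiled jags.model objects (placeholder names)

dic_values <- sapply(models, function(m) {
  ## dic.samples() returns mean deviance and the pD penalty per observed node
  d <- dic.samples(m, n.iter = 5000, type = "pD")
  sum(d$deviance) + sum(d$penalty)  # total DIC for this model
})

## Rank the models; the smallest DIC would be the preferred model
sort(dic_values)
```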
Could anyone give some guidance on that? I would greatly appreciate any suggestions.