My experience with N-mixture models for estimating abundance from counts of unmarked individuals is that they can be highly sensitive to extra-binomial variation in the replicate counts. The issue is that the model relies on the binomial sampling assumption to tease apart N from p. When N is small and p is near one, the variance in the counts approaches zero, but as N increases and p decreases toward zero, the variance among the counts approaches the mean (i.e., Np). Thus, when extra-binomial variation is present in the counts, the model will produce positively biased estimates of N and negatively biased estimates of p, and the mixing of the Bayesian posterior distributions can become poor (this might not be a problem when using likelihood approaches such as the unmarked package). I did, however, find that the covariate estimates for N and p remained fairly unbiased.
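The variance behaviour described above follows directly from the binomial variance Np(1-p), and is easy to check with a quick simulation. This is a minimal sketch in Python/NumPy (not from the thread; the specific N and p values are illustrative choices that hold the expected count Np fixed at 20 while shifting it between N and p):

```python
import numpy as np

rng = np.random.default_rng(1)
n_reps = 100_000

# Hold the expected count Np = 20 fixed, but split it differently
# between abundance N and detection probability p.
for N, p in [(20, 1.0), (25, 0.8), (100, 0.2), (2000, 0.01)]:
    counts = rng.binomial(N, p, size=n_reps)
    print(f"N={N:5d}  p={p:.2f}  mean={counts.mean():6.2f}  "
          f"var={counts.var():6.2f}  theory Np(1-p)={N * p * (1 - p):6.2f}")
```

All four settings have the same mean count, but the variance runs from 0 (N small, p near one) up to nearly the mean Np (N large, p near zero) — which is why the replicate-count variance carries the information that separates N from p, and why extra-binomial noise in the counts pushes the model toward larger N and smaller p.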
The paper recommended by Andrew is a good one, and I would also recommend the following paper.
Article On the reliability of N-mixture models for count data
I agree with Matt's recommendation that you perform a simulation study to determine model performance under the levels of extra-binomial variation that are relevant to your particular context. This could reveal the inferential consequences of this assumption violation in your setting. An example is in the following paper.
Article Assessing a Threatened Fish Species under Budgetary Constrai...
If the counts are truly negative binomial but you model them as Poisson (i.e., the data are overdispersed), then there isn't much of a problem. I used simulations to assess this and was surprised by how robust the Poisson N-mixture model is. However, I haven't checked for problems in more complex N-mixtures that use covariates, and this may very well be where problems arise. See Kéry and Royle's excellent book, "Applied Hierarchical Modeling in Ecology." Even though I don't think you'll find a direct answer to your question there, they have lots of great advice on how to diagnose goodness of fit of N-mixtures. I would recommend that you simulate data under different distributional assumptions and then check how well your particular model can recover the (known) parameters.
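The simulate-and-recover check recommended above can be done outside of unmarked as well. This is a minimal Python/SciPy sketch (my own illustration, not code from any of the cited papers): it simulates repeated counts from a known N-mixture process, then recovers lambda and p by maximising the marginal likelihood, summing the latent abundance out up to a truncation point `n_max`. Swapping the Poisson abundance draw for a negative binomial one (via the `nb_size` argument) lets you probe the overdispersion scenarios discussed in this thread.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def simulate(n_sites=150, n_visits=4, lam=5.0, p=0.4, nb_size=None):
    """Simulate an n_sites x n_visits count matrix from an N-mixture process.

    Abundance is Poisson(lam) by default; pass nb_size to use a negative
    binomial with mean lam instead (smaller nb_size = more overdispersion).
    """
    if nb_size is None:
        N = rng.poisson(lam, n_sites)
    else:
        N = rng.negative_binomial(nb_size, nb_size / (nb_size + lam), n_sites)
    return rng.binomial(N[:, None], p, (n_sites, n_visits))

def nll(params, y, n_max=200):
    """Negative log-likelihood of the Poisson N-mixture model.

    params holds log(lambda) and logit(p); the latent site abundance is
    marginalised by summing over N = 0..n_max.
    """
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    Ns = np.arange(n_max + 1)
    log_prior = stats.poisson.logpmf(Ns, lam)        # P(N) for each candidate N
    ll = 0.0
    for row in y:
        # log P(counts | N, p) for every candidate N; -inf where N < max count
        log_obs = stats.binom.logpmf(row[:, None], Ns[None, :], p).sum(axis=0)
        ll += np.logaddexp.reduce(log_prior + log_obs)
    return -ll

y = simulate()  # well-specified case: Poisson abundance, true lam=5, p=0.4
res = optimize.minimize(nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
```

Under the well-specified Poisson case the estimates land near the true lambda = 5 and p = 0.4; rerunning with `simulate(nb_size=1.0)` while still fitting the Poisson likelihood is the kind of mismatch experiment the posters describe.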
Underestimation of abundance if there is overdispersion in abundance relative to the fitted model, and overestimation if there is overdispersion in detection...
See:
Article Sensitivity of binomial N‐mixture models to overdispersion: ...