I have run an interrupted time series (ITS) analysis on a relatively short time series [with sparse data] using a binomial regression with a logit link. The model summary is below:
    Call:
    glm(formula = `Subject Refused Ratio` ~ Quarter + int2 + time_since_intervention2,
        family = binomial(link = "logit"), data = df)

    Deviance Residuals:
         Min        1Q    Median        3Q       Max
    -0.31273  -0.12677  -0.00424   0.12379   0.36520

    Coefficients:
                              Estimate Std. Error z value Pr(>|z|)
    (Intercept)              -0.086241   1.050010  -0.082    0.935
    Quarter                  -0.001962   0.108611  -0.018    0.986
    int2                      0.320026   1.832609   0.175    0.861
    time_since_intervention2 -0.016696   0.327789  -0.051    0.959

    (Dispersion parameter for binomial family taken to be 1)

        Null deviance: 0.72257  on 23  degrees of freedom
    Residual deviance: 0.65340  on 20  degrees of freedom
    AIC: 41.065

    Number of Fisher Scoring iterations: 3
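For context, this is roughly how the model terms are set up (a minimal sketch; `intervention_quarter` is a placeholder for the quarter in which the intervention started, and the exact construction of the variables is simplified here):

    # Rough sketch of the segmented (ITS) setup. `intervention_quarter` is a
    # placeholder for the quarter in which the intervention started.
    df$int2 <- as.integer(df$Quarter >= intervention_quarter)                  # level change (0/1)
    df$time_since_intervention2 <- pmax(0, df$Quarter - intervention_quarter)  # slope change

    fit <- glm(`Subject Refused Ratio` ~ Quarter + int2 + time_since_intervention2,
               family = binomial(link = "logit"), data = df)
    summary(fit)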
The level change [int2] and trend change [time_since_intervention2] estimates in the summary are 0.32 and -0.02 on the log-odds scale (which can be exponentiated to odds ratios), and they suggest a large change in the post-intervention period. However, in each case the Pr(>|z|) value is well above conventional significance thresholds (e.g. 0.05). Does anyone interpret this differently, or am I missing something obvious?
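For reference, this is how I am exponentiating the estimates to odds ratios, with profile-likelihood confidence intervals (a minimal sketch, using the fitted model `fit` from the summary above):

    # Odds ratios with profile-likelihood confidence intervals for the fitted model.
    exp(cbind(OR = coef(fit), confint(fit)))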
Or can a single p-value for the overall intervention effect be derived?
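One option I have considered (a sketch only, and possibly not the right approach) is a likelihood-ratio test comparing the model with and without the two intervention terms, which gives a single joint p-value for the intervention:

    # Joint test of the intervention terms (level + slope change) via a
    # likelihood-ratio test against the model without them.
    fit_full    <- glm(`Subject Refused Ratio` ~ Quarter + int2 + time_since_intervention2,
                       family = binomial(link = "logit"), data = df)
    fit_reduced <- glm(`Subject Refused Ratio` ~ Quarter,
                       family = binomial(link = "logit"), data = df)
    anova(fit_reduced, fit_full, test = "LRT")  # single p-value for both intervention terms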
I have tried a variety of different model types [including ARMA and Fourier terms], but none of the models shows a 'statistically significant' result. Does anyone have advice on how to derive and use p-values for ITS [in case I am doing it incorrectly], or on how to frame this issue? Any help hugely appreciated.
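For reference, the Fourier-term variant I mean looks roughly like this (a sketch only; the quarterly period of 4 and the single sin/cos pair are assumptions, not necessarily the exact terms I used):

    # Sketch of seasonal Fourier (harmonic) terms for quarterly data (period = 4),
    # added alongside the segmented intervention terms.
    df$sin1 <- sin(2 * pi * df$Quarter / 4)
    df$cos1 <- cos(2 * pi * df$Quarter / 4)

    fit_fourier <- glm(`Subject Refused Ratio` ~ Quarter + sin1 + cos1 +
                         int2 + time_since_intervention2,
                       family = binomial(link = "logit"), data = df)
    summary(fit_fourier)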