09 September 2014

I generated a corrected cross-correlogram. To do so, I computed a raw cross-correlogram with a 1 ms bin size and subtracted an averaged jittered cross-correlogram from it. This average was generated from a pool of 1000 surrogate signals, in which every spike, in both spike trains, was jittered randomly within 50 ms of its original position.
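In case it helps to make the procedure concrete, here is a minimal sketch of what I did (the function names, the uniform ±50 ms jitter, and the ±2 s lag range are my own illustrative choices, not a fixed implementation):

```python
import numpy as np

def cross_correlogram(t1, t2, bin_ms=1.0, max_lag_ms=2000.0):
    """Raw cross-correlogram: histogram of pairwise spike-time differences (t2 - t1), in ms."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)  # 4000 bins of 1 ms over +/-2 s
    diffs = (t2[None, :] - t1[:, None]).ravel()                  # simple (memory-hungry) all-pairs approach
    diffs = diffs[np.abs(diffs) <= max_lag_ms]
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

def jitter_corrected_ccg(t1, t2, n_surrogates=1000, jitter_ms=50.0,
                         bin_ms=1.0, max_lag_ms=2000.0, seed=0):
    """Raw CCG minus the mean CCG of independently jittered surrogate spike trains."""
    rng = np.random.default_rng(seed)
    raw, edges = cross_correlogram(t1, t2, bin_ms, max_lag_ms)
    jittered_sum = np.zeros_like(raw, dtype=float)
    for _ in range(n_surrogates):
        # every spike in both trains is moved by a random amount within +/- jitter_ms
        j1 = np.asarray(t1) + rng.uniform(-jitter_ms, jitter_ms, size=len(t1))
        j2 = np.asarray(t2) + rng.uniform(-jitter_ms, jitter_ms, size=len(t2))
        jittered_sum += cross_correlogram(j1, j2, bin_ms, max_lag_ms)[0]
    jitter_mean = jittered_sum / n_surrogates
    return raw - jitter_mean, jitter_mean, edges
```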

Then, in every time bin of the corrected cross-correlogram, I asked how likely that value is to occur by chance. I assume that the count in each bin follows a Poisson distribution with mean equal to the value in the same bin of the averaged jittered cross-correlogram.
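The per-bin test I have in mind looks roughly like this (a sketch only; I am assuming an upper-tail test of the raw count against the jittered mean, using scipy.stats.poisson):

```python
import numpy as np
from scipy.stats import poisson

def per_bin_pvalues(raw_counts, jitter_mean):
    """Upper-tail Poisson p-value per bin: P(X >= observed) with X ~ Poisson(jittered mean)."""
    raw_counts = np.asarray(raw_counts)
    jitter_mean = np.asarray(jitter_mean, dtype=float)
    # sf(k - 1) = P(X > k - 1) = P(X >= k) for a discrete distribution
    return poisson.sf(raw_counts - 1, jitter_mean)

# Usage with the sketch above (corrected = raw - jitter_mean, so raw = corrected + jitter_mean):
# corrected, jitter_mean, edges = jitter_corrected_ccg(t1, t2)
# pvals = per_bin_pvalues(corrected + jitter_mean, jitter_mean)
# significant = pvals < 0.01
```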

If I accept a significance level of 0.01, then in a 4-second window (4000 one-millisecond bins) I should expect about 40 bins to come out as significantly different purely by chance.
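For concreteness, that expectation is just the significance level times the number of bins; if the bins were independent, the number of chance-significant bins would be roughly binomial:

```python
from scipy.stats import binom

alpha, n_bins = 0.01, 4000            # 1 ms bins over a 4 s window
chance_hits = binom(n_bins, alpha)    # count of chance-significant bins, assuming independent bins
print(chance_hits.mean())             # 40.0 expected false positives
print(chance_hits.std())              # ~6.3, so counts in the high 20s to low 50s are unremarkable
```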

Here are my questions:

1- A p-value threshold of 0.01 sounds a little weak as a criterion of significance, given that out of 4000 bins I should expect about 40 to be significantly different by chance. I have read that I should instead look for sequences of consecutive significantly different bins. My question is: what features must such a sequence have? Length? Frequency?

2- What is a good policy for picking the bin size of the correlogram, and the window size used for jittering/shuffling?

3- Is there a good metric for distinguishing common input from synaptic coupling?

4- I did not use a shift predictor (shifting the whole spike train by one trial) because it assumes that the spike train is stationary across trials. But do you think I should try it anyway? What information could it give me that the jitter-based predictor cannot?
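For reference, this is the kind of shift predictor I mean (a sketch only, assuming spike times are stored per trial and aligned to trial onset; the wrap-around trial shift is just an illustrative choice):

```python
import numpy as np

def shift_predictor(trials1, trials2, bin_ms=1.0, max_lag_ms=2000.0):
    """Shift predictor: cross-correlogram between cell 1 on trial i and cell 2
    on trial i+1 (wrapping around), averaged over trials."""
    n_trials = len(trials1)
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    acc = np.zeros(len(edges) - 1, dtype=float)
    for i in range(n_trials):
        t1 = np.asarray(trials1[i], dtype=float)
        t2 = np.asarray(trials2[(i + 1) % n_trials], dtype=float)  # shift by one trial
        diffs = (t2[None, :] - t1[:, None]).ravel()
        diffs = diffs[np.abs(diffs) <= max_lag_ms]
        acc += np.histogram(diffs, bins=edges)[0]
    return acc / n_trials, edges
```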
