I've read the attached article on VISion super-resolution imaging and found it very interesting, as it seems to offer a computationally simpler alternative to SOFI. The first step of the data processing is clear: one takes a temporal stack of images F(r,t) and calculates the variance G2(r,0) (so far, this is exactly the same as 2nd-order SOFI at zero lag time).

However, the next step is not clear to me: "When F(r,t) was substituted with G2(r,0) and the variance was again calculated, the PSF was again √2-fold." Since G2(r,0) is a single image, not a stack, how is the variance calculated in this case? Does the procedure involve splitting the original stack into sub-stacks, whose variances are then used as the input to the next step? If so, can anyone advise me on how the stack was split?
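For concreteness, here is a minimal NumPy sketch of the interpretation I'm asking about, assuming the stack is split into equal contiguous blocks along the time axis. The function name and the n_substacks parameter are my own guesses, not anything stated in the article:

```python
import numpy as np

def substack_variance(stack, n_substacks=10):
    """Guessed iteration step: split F(r,t) along the time axis into
    sub-stacks, compute the temporal variance of each (2nd-order SOFI
    at zero lag), and return the resulting stack of G2(r,0) images so
    that the variance can be taken again in the next iteration.

    stack: ndarray of shape (T, H, W).
    n_substacks: hypothetical splitting parameter, not from the article.
    """
    sub_stacks = np.array_split(stack, n_substacks, axis=0)
    # Variance over time within each sub-stack -> one G2(r,0) image each
    return np.stack([s.var(axis=0) for s in sub_stacks])

# The two-step procedure as I currently understand it:
# F = ...                                           # raw stack, shape (T, H, W)
# G2_stack = substack_variance(F, n_substacks=20)   # stack of G2(r,0) images
# G4 = G2_stack.var(axis=0)                         # variance of the variances
```

Is this roughly what the authors did, and if so, how was the number (or length) of the sub-stacks chosen?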
Thank you!
Article: "Real-Time Nanoscopy by Using Blinking Enhanced Quantum Dots"