27 December 2022

Hi!

Briefly, here is the issue I would kindly like your opinion on: I used a bootstrapping method for a score measurement (e.g., 1000 bootstrap repetitions), each time resampling 50 trials and computing a score, e.g., a peak, from them using a calculation rather than simple averaging. Following some reference literature, the standard error (SE) of such a bootstrapping process is the standard deviation (SD) of the, e.g., 1000 bootstrap results, rather than the SE obtained by dividing the SD by the square root of the number of samples (SE = SD/sqrt(N)).
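As a minimal sketch of the two computations I am comparing (the waveform shape, the peak-latency score, and all numbers below are illustrative placeholders, not my actual data or code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 noisy trials of a waveform with a peak near sample 100.
n_trials, n_samples = 50, 200
t = np.arange(n_samples)
trials = (np.exp(-0.5 * ((t - 100) / 15) ** 2)
          + rng.normal(0, 0.5, (n_trials, n_samples)))

def peak_latency(x):
    """Score = latency (sample index) of the peak of the averaged waveform."""
    return np.argmax(x.mean(axis=0))

# Bootstrap SE: resample trials with replacement, recompute the score each time,
# and take the SD of the 1000 bootstrap scores (no division by sqrt(N)).
n_boot = 1000
boot_scores = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n_trials, n_trials)
    boot_scores[b] = peak_latency(trials[idx])
se_bootstrap = boot_scores.std(ddof=1)

# "Classical" SE: compute the score on each individual trial, then SD / sqrt(N).
trial_scores = np.array([np.argmax(trial) for trial in trials])
se_classical = trial_scores.std(ddof=1) / np.sqrt(n_trials)

print(f"bootstrap SE (SD of bootstrap scores):        {se_bootstrap:.3f}")
print(f"classical SE (SD of trial scores / sqrt(N)):  {se_classical:.3f}")
```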

The issue is that the SE obtained from the SD of the bootstrap results is worse (larger) than the SE obtained by calculating the score, e.g., peak latency, from the individual trials and then applying the above formula.

Is there any problem with my method for calculating the SE of the bootstrapping results? Why is the SE from the simple calculation on the trial scores smaller than the SE from the bootstrapping method? That is not what I expected, given that the bootstrap scores come from averaging.
