Hello to all,

Suppose a signal is sampled every dt seconds, giving a discrete signal of N samples. The frequency step of the FFT should then be df = 1/(N·dt). I expected that increasing the number of samples (and consequently decreasing dt) would decrease the frequency step, leading to a finer FFT. But I understand that the only way to decrease the frequency step df is to increase the overall duration of the signal (the length of the window). Where do I go wrong here?
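To make the relation concrete, here is a minimal numerical sketch (using NumPy; the variable names and chosen values are just for illustration) showing that df = 1/(N·dt) is set by the total record length T = N·dt: raising the sampling rate at fixed T leaves df unchanged, while lengthening T shrinks df.

```python
import numpy as np

T = 1.0                               # fixed window length in seconds

# More samples within the same window: dt shrinks, but df stays the same.
for N in (100, 1000):
    dt = T / N
    df = np.fft.fftfreq(N, d=dt)[1]   # spacing between adjacent FFT bins
    print(f"N={N:5d}, dt={dt:.4g} s -> df={df:.4g} Hz")

# Doubling the window length (same N) halves df instead.
for T2 in (1.0, 2.0):
    N = 1000
    dt = T2 / N
    df = np.fft.fftfreq(N, d=dt)[1]
    print(f"T={T2} s -> df={df:.4g} Hz")
```

Running this prints df = 1 Hz for both N = 100 and N = 1000 when T is fixed, and df drops from 1 Hz to 0.5 Hz only when the window is lengthened.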
