08 August 2017

Hi all, I have a question and need your help.

Suppose I use white noise as the input signal u and receive y at the Rx. The channel's frequency response is a band-pass filter with a passband from 30 Hz to 30 kHz. In the time domain, y = u * h (where * denotes convolution). So I can use least-squares estimation to recover h from u and y, and then obtain the channel frequency response H by taking the FFT of h.
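Here is a minimal sketch of what I mean by the least-squares step, in Python. The sampling rate fs, the FIR length L, and the Butterworth band-pass standing in for the channel are all my own assumptions for illustration, not part of the actual setup:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import butter, lfilter

fs = 100_000                       # assumed sampling rate (Hz)
N  = 20_000                        # number of input samples
L  = 256                           # assumed length of the FIR estimate of h

rng = np.random.default_rng(0)
u = rng.standard_normal(N)         # white-noise input signal

# stand-in channel: 4th-order Butterworth band-pass, 30 Hz .. 30 kHz
b, a = butter(4, [30, 30_000], btype='bandpass', fs=fs)
y = lfilter(b, a, u)               # received signal y = u * h

# build the convolution (Toeplitz) matrix U so that y ≈ U @ h
first_row = np.zeros(L)
first_row[0] = u[0]
U = toeplitz(u, first_row)         # shape (N, L)

# least-squares estimate of the impulse response
h_hat, *_ = np.linalg.lstsq(U, y, rcond=None)

# frequency response of the estimate via FFT
H_hat = np.fft.rfft(h_hat, n=4096)
freqs = np.fft.rfftfreq(4096, d=1/fs)
```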

Then I filter the white noise with a band-stop filter whose stopband runs from 1 kHz to 10 kHz and send the filtered signal into the channel. In other words, the band-stop-filtered white noise now replaces the original white noise as the input.
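Continuing the sketch above (u, fs, and the stand-in channel b, a as defined there), the modified experiment would look like this; the band-stop filter order is again just an assumption:

```python
from scipy.signal import butter, lfilter

# band-stop filter the white noise before it enters the channel
b_bs, a_bs = butter(4, [1_000, 10_000], btype='bandstop', fs=fs)
u_bs = lfilter(b_bs, a_bs, u)      # white noise with the 1 kHz - 10 kHz band removed
y_bs = lfilter(b, a, u_bs)         # pass the filtered noise through the same channel

# repeat the Toeplitz / least-squares step with (u_bs, y_bs) to get the new estimate of h
```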

My question is: if I run the least-squares estimation again to get h, how does the estimated channel frequency response change? Is it unchanged, or does it show smaller or larger attenuation?

Thanks!
