I am trying to simulate a frequency-selective fading channel in MATLAB and I have run into some issues/questions during the implementation.

It is a DSSS (BPSK) system, and the data is organized in 5000 packets of 2550 symbols each.

The channel I want to simulate has the following parameters:

SamplingRate    = 50e-9;                           % sample time Ts = 50 ns (a period, despite the name)
t_coh           = 30e-6;                           % coherence time
maxDopplerShift = 1/t_coh;                         % ~33.3 kHz
AvGain          = [-3 0 -15 -7 -10 -15 -50];       % average path gains (dB)
PathDelay       = [0 50 100 150 200 250 300]*1e-9; % path delays (s)

My first approach was to use the rayleighchan function:

chan = rayleighchan(SamplingRate, maxDopplerShift, PathDelay, AvGain);

I use filter(h,1,tx_modulated) in a for loop, once per packet. I also reset the channel after every iteration:

chan.reset;              % use a different Rayleigh channel realization for each packet
Paths = chan.PathGains;
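Put together, the loop looks roughly like this (a sketch with placeholder names such as numPackets and rx; note that for the legacy channel object the documented call is filter(chan, x), whereas filter(h,1,x) would be manual FIR filtering with extracted tap gains):

chan = rayleighchan(SamplingRate, maxDopplerShift, PathDelay, AvGain);

numPackets = 5000;
for p = 1:numPackets
    chan.reset;                         % new, independent channel realization
    rx    = filter(chan, tx_modulated); % filter one packet through the channel
    Paths = chan.PathGains;             % tap gains seen by this packet
    % ... despread, demodulate BPSK, accumulate bit errors ...
end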

Why reset? Because I assume that if I want to estimate the bit error rate over an "infinite" simulation time, the time variation of the channel doesn't matter: simulating a time-varying channel for an "infinite" time should give the same average as repeatedly resetting the channel to a new random realization, packet by packet.
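One thing worth checking (my own back-of-the-envelope arithmetic, assuming the channel is sampled at least once per symbol at Ts = 50 ns): a packet spans at least 2550 * 50 ns = 127.5 us, and with DSSS spreading it is even longer, while t_coh = 30 us. So each packet already covers several coherence times and the taps evolve noticeably within a packet; the reset only makes successive packets independent of each other.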

My second approach uses the comm.RayleighChannel System object:

chan = comm.RayleighChannel(...);
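For reference, this is how I would map the parameters above onto the System object (my assumption, not necessarily the exact configuration used; note that comm.RayleighChannel expects a sample rate in Hz, so if SamplingRate is really a 50 ns period, the rate is 1/SamplingRate = 20 MHz):

chan = comm.RayleighChannel( ...
    'SampleRate',          1/SamplingRate, ...  % 20 MHz; the object wants Hz, not seconds
    'PathDelays',          PathDelay, ...
    'AveragePathGains',    AvGain, ...
    'MaximumDopplerShift', maxDopplerShift, ...
    'PathGainsOutputPort', true);

[rx, pathGains] = chan(tx_modulated);           % gains evolve continuously over time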

In this simulation, the tap gains change as the data passes through the channel, so I assume this is a genuinely time-variant simulation. I can see the impulse-response taps and the frequency response changing slowly over the whole run. So, in my understanding, depending on the initial channel state the BER can be worse or better than in the first approach: the frequency response can be worse or better than the average over the resets of the first approach. I assume that for an "infinite" simulation both approaches would converge to the same result.
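If it helps to compare the two approaches directly, the System object can also be reset per packet (a sketch; as far as I understand, with the default 'RandomStream' of 'Global stream', reset(chan) clears the fading filter state but keeps drawing fresh random numbers, so each packet gets an independent realization, whereas 'mt19937ar with seed' would replay the same one):

for p = 1:numPackets
    reset(chan);                      % independent realization per packet
    [rx, g] = chan(tx_modulated);     % g: per-sample path gains for this packet
    % ... demodulate and accumulate bit errors, as in the first approach ...
end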

I don't know if my assumptions are correct; I hope someone can help me.

Thanks,
