It sounds as though you want to apply an inverse fast Fourier transform (iFFT) to your power spectral density (PSD) function P(f), thereby transforming the PSD in the frequency domain into a time series in the time domain, where that time series is random.
To this end, I assume:
(1) Your given PSD function is sampled at M positive frequencies f(k) = k X Df, k = 1,2,3…M, where Df is the sample spacing of the PSD function in the frequency domain, and M is the number of samples in the PSD;
(2) The number of samples in your given PSD is M = N/2 + 1, where N is the number of samples in the fast Fourier transform (FFT), N = 256, or 1024, or 2048, … or any other integer power of 2, as usual for the FFT; and
(3) The maximum frequency in your given PSD is Fmax.
If (1) and (2) are not true, then you must re-sample your PSD such that (1) and (2) are true. If (1) and (2) are true, then you are almost ready to apply an inverse FFT (iFFT) algorithm, which you presumably already have on hand ready to use, in order to generate a corresponding time series.
All you have to do BEFORE applying the inverse FFT (iFFT) is:
(4) Convert power PSD(f(k)) to amplitude A(f(k)), A(f(k)) = SQRT[2 X PSD(f(k))];
(5) Give each spectral component a random phase, PHI(f(k)) = random number, uniformly distributed between 0 and 360 degrees (or equivalently, between 0 and 2Pi radians);
(6) Construct a frequency domain signal Z(f(k)) = A(f(k)) X e^(i X PHI(f(k))), where i is the imaginary number, i = SQRT(-1)
Use a uniform random number generator to generate the random phases.
In this way, you have gone from PSD(f(k)) to a corresponding signal in frequency domain Z(f(k)), where Z(f(k)) is ready for input into the inverse FFT (iFFT) algorithm for positive valued (one-sided) functions in frequency f(k) > 0.
Now, finally, apply the inverse FFT (iFFT) to Z(f(k)).
The output of the iFFT will be a random time series on the finite (bounded, limited) time interval t = 0 to tmax = (N-1) X Dt, where Dt = 1 / (2 X Fmax).
The PSD of this time series will be the same as your given PSD. You should check your result, of course, by taking the FFT of the resulting random time series in order to confirm that it gives the same PSD as your given PSD.
Note that the time series is periodic: it will repeat itself at times greater than tmax and at times less than 0. So you cannot generate an ever longer time series this way unless you increase N in the sampling of your PSD and in the inverse FFT accordingly.
You can always generate a new random time series by generating a new set of random phases PHI(f(k)) = random number, uniformly distributed between 0 and 360 degrees (or equivalently, between 0 and 2Pi radians), and then applying the iFFT once again. This assumes that the seed of the random number generator is different each time you undertake to generate a new random time series from the given PSD using the steps above.
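As a concrete illustration, the steps above can be sketched in NumPy. The FFT length, Fmax, and the placeholder PSD below are assumptions to make the sketch self-contained, and `np.fft.irfft` plays the role of the one-sided (Hermitian) iFFT:

```python
import numpy as np

rng = np.random.default_rng()

# Assumed example inputs: a one-sided PSD sampled at M = N/2 + 1 bins.
N = 1024                       # FFT length (a power of 2)
M = N // 2 + 1                 # number of one-sided PSD samples
Fmax = 100.0                   # maximum frequency in the PSD (assumed)
f = np.linspace(0.0, Fmax, M)  # frequency grid
psd = 1.0 / (1.0 + f)          # placeholder PSD; substitute your own

# (4) power -> amplitude (absolute scaling depends on your PSD
#     normalization; verify it by checking total power afterwards)
A = np.sqrt(2.0 * psd)
# (5) random phases, uniform on [0, 2*pi)
phi = rng.uniform(0.0, 2.0 * np.pi, M)
# (6) complex one-sided spectrum
Z = A * np.exp(1j * phi)
# DC and Nyquist bins must be real for a real time series
Z[0] = Z[0].real
Z[-1] = Z[-1].real

# inverse FFT for a one-sided spectrum -> real random time series
x = np.fft.irfft(Z, n=N)

dt = 1.0 / (2.0 * Fmax)        # time step Dt = 1 / (2 X Fmax)
t = np.arange(N) * dt          # t runs from 0 to (N-1) X Dt
```

Generating a fresh set of phases `phi` and repeating the last steps produces a new, statistically independent time series from the same PSD.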
Ronald's answer is an excellent one. I believe it can be made more accurate with a few slight modifications.
First we should return to the definition of the PSD. The power spectral density is a representation of the statistical distribution of a signal's power across the frequency range, not an actual one-to-one representation of the signal in the frequency domain as in the FFT.
Although the PSD provides some useful information (namely the average power at a specific frequency), it still lacks three important characteristics:
1. The statistical distribution of the signal power (not just the mean)
In fact, this is exactly the variance of the signal amplitude at each specific frequency.
2. The correlation between different frequencies (if any)
3. The phase data for each frequency.
If we have any of this information, we had better try to incorporate it into the modeled signal. Otherwise, we should assume totally random values, as Ronald assumed for the phase.
What can be added to Ronald's model is to make each bin's amplitude a random value as well, with a variance equal to the value of the PSD bin at that frequency.
To do this you can follow either of two methods.
The first one is just an extension of Ronald's method.
1. Use a random number generator to generate the random phases for each bin.
2. Use another random generator to generate the amplitudes of the bins.
3. Multiply the generated amplitudes by the square root of the PSD values (converting variance to standard deviation).
4. Perform an IFFT on the generated sequence to get its time-domain counterpart.
5. Repeat 1->4 multiple times, concatenating each time-domain sequence with the preceding one, to get a real random signal with the required PSD.
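A minimal NumPy sketch of this first method follows. The segment count, FFT length, and placeholder PSD are assumptions, and Rayleigh-distributed amplitudes (normalized so the mean-square amplitude is 1) are assumed here, which makes the resulting time series Gaussian:

```python
import numpy as np

rng = np.random.default_rng()

N = 1024                        # FFT length (assumed)
M = N // 2 + 1                  # one-sided bins, DC .. Nyquist
psd = np.ones(M)                # placeholder PSD; substitute your own

segments = []
for _ in range(4):                                   # step 5: repeat
    phi = rng.uniform(0.0, 2.0 * np.pi, M)           # step 1: random phases
    # step 2: random amplitudes; Rayleigh with E[amp^2] = 1 is an
    # assumption, chosen so the variance matches the PSD bin value
    amp = rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=M)
    Z = amp * np.sqrt(psd) * np.exp(1j * phi)        # step 3: sqrt(PSD) scaling
    Z[0] = abs(Z[0])                                 # DC must be real
    Z[-1] = abs(Z[-1])                               # Nyquist must be real
    segments.append(np.fft.irfft(Z, n=N))            # step 4: time domain

x = np.concatenate(segments)                         # step 5: concatenate
```

Note that simple concatenation introduces discontinuities at the segment boundaries; windowed overlap-add is a common refinement if those matter for your application.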
Another method, which I prefer as it requires less computational power (assuming you have dedicated, optimized FFT/IFFT functions):
1. Take the square root of the PSD.
2. Generate random numbers with a variance of 1 and the same length as the PSD bins.
3. Take the FFT of this random sequence.
4. Multiply this sequence by the square root of the PSD.
5. Take the IFFT of the product.
6. Repeat 2->5 and concatenate the newer sequences.
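This white-noise shaping method can be sketched in NumPy as follows; the segment length and the placeholder (symmetric, two-sided) PSD are assumptions:

```python
import numpy as np

rng = np.random.default_rng()

M = 1024                        # segment length (assumed)
psd = np.ones(M)                # placeholder two-sided, symmetric PSD
H = np.sqrt(psd)                # step 1: square root of the PSD

segments = []
for _ in range(4):                             # step 6: repeat
    w = rng.standard_normal(M)                 # step 2: real white noise, var 1
    W = np.fft.fft(w)                          # step 3: FFT of the noise
    y = np.fft.ifft(W * H)                     # steps 4-5: shape and invert
    # with a real noise input and a PSD symmetric between positive and
    # negative frequencies, y is real up to round-off
    segments.append(y.real)

x = np.concatenate(segments)                   # step 6: concatenate
```

Because `w` is real, its FFT already has the conjugate symmetry that a real output requires; the symmetry of the PSD is what preserves it through the multiplication (see the note on DSB symmetry below).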
Using a real random generator for the raw time-domain signal provides white noise with a variance equal to 1 (distributed equally across the whole frequency range; you might need to scale for that later) with random phases between +/- pi.
So we don't need two random generators for the amplitude and phase.
I just noticed a missing note in my answer. Since I'm just multiplying by the square root of the PSD amplitude, the result in the time domain will be a complex signal.
To force the time-domain signal to be real, we need a double-sideband (DSB) PSD and must enforce symmetry between the upper sideband (USB) and the lower sideband (LSB) before multiplying its square root by the FFT of the white noise.
If you want to go with Ronald's answer, you should likewise ensure a DSB representation and enforce the LSB to be the complex conjugate of the USB (since he is adding phase information to the PSD).
Hello. In addition to the well thought out answers already given, I would add a couple of points. First, there is no unique random signal for a given PSD. Second, the PSD of measured data is an estimate. Without constraining the problem, you will have an infinite number of possible time series with the same PSD even in the absence of noise. With noise, it's worse than this since the PSD estimate may be a poor approximation of the actual PSD of the system. Many PSDs can be generated by putting white noise into a linear system. If you fit the PSD of a linear dynamical model (auto-regressive moving average) to your PSD you will get a representation for the dynamical system you have (this is related to Burg's method). You can then just use white noise as an input and adjust the variance to conserve total signal power. There is a good review of this type of stuff in the IEEE by Childers (if memory serves).
I have studied your answers. Thanks for your thoughtful response. I am in the process of applying your suggestion to my work and will update in due time about the results. Thanks to Kessel, Mina and Pulford.
It depends on the type of the signal. For complex signals (like IQ data for digital TRXs, the baseband representation of oscillator phase noise, ...) you have to go with a frequency range defined as -FS/2 -> FS/2 (where FS is the sampling frequency; the number of samples, i.e. the frequency resolution, will depend on the length of the time-domain sequence). In this case the PSD is not required to be symmetric between positive and negative frequencies.
On the other hand, for real signals you have one of two options: use -FS/2 to FS/2 again and enforce that the negative-frequency components are just the complex conjugates of the positive-frequency components, or just go from 0 to FS/2 (i.e. positive-frequency components and DC).
Which is better will depend on what IFFT engine you have at hand. I use Matlab, so I typically have both positive and negative frequency components, but rotated to run from 0 to FS.
The way to verify your results is to have some PSD algorithm in place. Again, I typically use the built-in PSD functions in Matlab, which basically divide the stream into multiple smaller parts, each with the length of the FFT, take the FFT of each of them, and then perform some kind of averaging between them. As for the scaling factors, I always forget them, so I always check the power in the frequency domain and the time domain and make sure they are equal.
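The segment-averaging check described above (which the Matlab built-ins perform internally) can be sketched in NumPy; the segment length and rectangular window are assumptions, and the scaling is verified by comparing total power in the two domains:

```python
import numpy as np

def psd_estimate(x, nfft, fs=1.0):
    """Average the periodograms of non-overlapping segments of length
    nfft (a Welch-style estimate with a rectangular window)."""
    nseg = len(x) // nfft
    segs = np.reshape(x[: nseg * nfft], (nseg, nfft))
    X = np.fft.rfft(segs, axis=1)
    P = (np.abs(X) ** 2) / (fs * nfft)   # one-sided PSD scaling
    P[:, 1:-1] *= 2.0                    # double interior bins, not DC/Nyquist
    return P.mean(axis=0)

# scaling check: total power recovered from the PSD should equal the
# mean-square value of the time-domain signal
rng = np.random.default_rng(0)
nfft = 256
x = rng.standard_normal(64 * nfft)       # unit-variance white noise
P = psd_estimate(x, nfft)
df = 1.0 / nfft                          # bin spacing for fs = 1
power_freq = np.sum(P) * df
power_time = np.mean(x ** 2)
# power_freq and power_time agree up to round-off (Parseval's theorem)
```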
In the procedure described by Ronald, the iFFT output values have an imaginary part, whereas time-domain values are usually real. What should be done with the imaginary part of the iFFT output values?
Whether the time-domain series is real or complex depends on the symmetry of the frequency-domain series between positive and negative frequencies.
In his approach, Ronald is assuming a one-sided IFFT algorithm (so basically assuming the FFT is done with sin + cos coefficients to control the phase, instead of exp(1i)).
If the IFFT algorithm you have is the general, complex, double-sided one, then you need an extra conditioning step.
After you create the positive-frequency bins (from 0 to N/2), you calculate the negative bins as the complex conjugates: C(-m) = conj(C(m)).
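This conditioning step can be sketched in NumPy. The spectrum values below are arbitrary placeholders; the point is the layout expected by a double-sided IFFT, with the negative bins as reversed conjugates of the interior positive bins:

```python
import numpy as np

rng = np.random.default_rng()

N = 256
# assumed one-sided spectrum: bin 0 = DC, bin N/2 = Nyquist
Zp = rng.standard_normal(N // 2 + 1) + 1j * rng.standard_normal(N // 2 + 1)
Zp[0] = Zp[0].real              # DC must be real
Zp[-1] = Zp[-1].real            # Nyquist must be real

# negative-frequency bins: reversed complex conjugate of the interior
# positive bins (DC and Nyquist are not duplicated)
Z = np.concatenate([Zp, np.conj(Zp[-2:0:-1])])

x = np.fft.ifft(Z)              # real up to round-off, thanks to the symmetry
```

If the symmetry is correct, the residual imaginary part of `x` is at the level of floating-point round-off and can safely be discarded with `x.real`.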
We are now simulating the response of our future silicon photomultiplier, and I need to prepare a simulation of a (real) noise signal; the spectral density function of this noise is supposed to be fixed later, and its bandwidth is known now. I've got the public FFTW C++ library. Could you recommend a way to simulate (real) time-domain signals of this noise?
Hello Ronald and Mina, I appreciate the methodology the two of you described on this topic, as it has aided me greatly. However, I was searching for publications that somehow certify the effectiveness of the output signal, and I didn't find any besides classic signal processing texts in which the formulae are described.
Would you have any sources of that type that could validate the method?
I don't know of a published reference on this methodology.
But none is required. You can (and should) always validate the method yourself when you use it. That is: compute the PSD for a time series generated using the method. Does it give the original PSD? If so, then you have validated the method for your signals.
Keep in mind that the PSD of a random signal will itself be somewhat random. So if you use the method to generate many time series, each with the same intended PSD, then the average of the validating PSDs computed from those time series should converge to the original PSD used to generate them.
Do I need to resample it to 129 samples (to satisfy the N = 256 requirement)?
These days Matlab can perform an FFT without N being a power of 2.
Another question: after constructing a frequency-domain signal Z (of even length, 100) following steps (4) to (6) recommended by Ronald, how do we add the complex conjugate to the frequency-domain signal? Can we simply append it as a row or column vector?
(7) New_Z = [0; Z; conj(Z(end-1:-1:1))] (the conjugate half must be reversed, a real DC bin is prepended, and the Nyquist bin Z(end) appears only once, so that New_Z has the Hermitian symmetry a real time series requires)
For Ronald's approach, the output of the iFFT will be a random time series on the finite (bounded, limited) time interval t = 0 to tmax = (N-1) X Dt, where Dt = 1 / (2 X Fmax).
Did you balance the symmetry in the real and imaginary parts of the noise transform? The 0th Fourier coefficient needs to be real. For all others, using Python indexing convention, the real part of the nth coefficient equals the real part of the -nth. The imaginary part of the nth coefficient equals minus the imaginary part of the -nth coefficient.