In an Analog-to-Digital Converter (ADC), the signal is first sampled at a rate greater than or equal to the Nyquist rate, then quantized and encoded. In an Analog-to-Information Converter (AIC), sampling and compression are combined in one step, mathematically:
y = Φ·x
where y is the compressed measurement vector, Φ is the sensing matrix, and x is the input signal. My questions are:
* Simulating the above equation in MATLAB amounts to downsampling: first, the input signal is read to obtain a vector of, say, N samples representing the signal, and this vector is then multiplied by the matrix Φ (a minimal sketch of this simulation is given after the questions below).
Does this mean that we sample the signal to obtain the vector of N samples? If so, where is the benefit? How are sampling and compression combined in one step? How can sampling and compression be combined in a practical way to perform compressive sensing?
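
For concreteness, here is a minimal MATLAB sketch of the simulation described above, assuming a Nyquist-rate vector x of length N and a random Gaussian sensing matrix Φ of size M×N with M < N; the particular test signal and the values of N, M, and Φ are illustrative assumptions, not part of the question:

```matlab
% Simulate y = Phi*x on an already Nyquist-sampled vector x.
% All values below are illustrative assumptions.
N = 256;                                       % length of the sampled vector x
M = 64;                                        % number of compressive measurements, M < N

n = (0:N-1).';
x = cos(2*pi*10*n/N) + 0.5*cos(2*pi*30*n/N);   % test signal, sparse in frequency

Phi = randn(M, N) / sqrt(M);                   % random Gaussian sensing matrix
y   = Phi * x;                                 % M measurements instead of N samples
```

Note that this simulation still starts from the full vector of N Nyquist-rate samples; only the stored data is reduced from N to M numbers, which is exactly what the questions above are asking about.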