Can we affirm that whenever one has a prediction algorithm, one can also get a correspondingly good compression algorithm for data one already has, and vice versa?
There is a strong relationship between compression and prediction. Prediction is a tool of compression. Assume you have data that contains redundancy: you can predict the redundant part from the context of the signal and remove it by simply subtracting the predicted signal from the real signal.
The difference will be the compressed signal.
Prediction is a powerful concept for reducing the redundancy in signals and consequently compressing them.
Prediction is used intensively in video codecs and other signal codecs.
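As a concrete (if simplified) picture of this idea, here is a minimal Python sketch using a trivial "previous sample" predictor; the function names and the toy signal are only illustrative, not taken from any real codec:

```python
import numpy as np

# A minimal sketch of predictive (residual) coding: predict each sample from
# the previous one, keep only the residual, and reconstruct losslessly.

def encode(signal):
    signal = np.asarray(signal, dtype=np.int64)
    prediction = np.concatenate(([0], signal[:-1]))  # predict "same as previous sample"
    residual = signal - prediction                   # small values when the signal is redundant
    return residual

def decode(residual):
    # Undo the prediction step: cumulative sum reverses the differencing.
    return np.cumsum(residual)

x = np.array([100, 101, 103, 104, 104, 106])
r = encode(x)
assert np.array_equal(decode(r), x)
print(r)  # [100   1   2   1   0   2] -> small residuals are easier to entropy-code
```

The residual here carries the same information as the original signal, but its values are concentrated near zero, which is exactly what makes a subsequent entropy coder effective.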
Generally, the terms prediction and compression are used in different senses; I am not sure in what sense you want to link these two kinds of algorithms. Please elaborate on your objective more clearly.
If you use losslessly compressed data for prediction, the compression algorithm cannot affect the results of your prediction algorithm.
Gonçalo Peres 龚燿禄, it is as Yashwant Kurmi said. I also could not relate prediction to compression.
It might not be what you asked, but you can use autoencoders.
If you want to make predictions on data, you can use autoencoders to compress it into a latent space. This learned representation can then be used to reconstruct the input on which you make predictions.
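As a rough illustration of this idea, here is a minimal autoencoder sketch; PyTorch is assumed as the framework, and the layer sizes and names are illustrative only:

```python
import torch
import torch.nn as nn

# A minimal autoencoder: the encoder compresses inputs into a small latent
# vector, the decoder reconstructs them from that compressed representation.

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)       # compressed latent representation
        return self.decoder(z)    # reconstruction used for the training loss

model = AutoEncoder()
x = torch.randn(16, 784)                          # a dummy batch of inputs
loss = nn.functional.mse_loss(model(x), x)        # reconstruction error
loss.backward()                                   # train by minimizing it
```

The latent vector `z` is the compressed form of the input; downstream prediction models can be trained on `z` instead of the raw data.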
Generally speaking, compression means "reducing the size of data by exploiting redundancy in it," while prediction means "forecasting future values based on available or existing experience/data". You may have asked this question in relation to some field or particular niche; it would be helpful to add that context so the question can be answered properly.
I agree with Abdelhalim abdelnaby Zekry in this sense. Algorithms in linear predictive coding (LPC), such as Levinson-Durbin, are widely used in compression. So, I stand corrected on my previous comment.
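For illustration, here is a minimal sketch of the Levinson-Durbin recursion as it is commonly written; the implementation and the toy signal are my own illustration, not taken from any particular codec:

```python
import numpy as np

# Levinson-Durbin recursion used in LPC: given the autocorrelation of a signal,
# solve for predictor coefficients a[1..order] such that
# x[n] is approximated by -sum(a[k] * x[n-k]).

def levinson_durbin(r, order):
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / error                       # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]    # update earlier coefficients
        a[i] = k
        error *= (1.0 - k * k)                 # remaining prediction error power
    return a, error

x = np.sin(0.3 * np.arange(256))               # a toy, highly predictable signal
r = np.correlate(x, x, mode='full')[len(x) - 1:]
a, err = levinson_durbin(r, order=2)
print(a, err)   # near-zero err means the residual compresses very well
```

The small residual error for a predictable signal is exactly what an LPC-based coder exploits: it transmits the predictor coefficients plus a low-energy residual instead of the raw samples.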
They are definitely not the same thing; however, there is a strong connection between them. Shannon's source coding theorem establishes that if you have a sequence of i.i.d. random variables, you cannot compress them with an average number of bits per symbol less than the Shannon entropy, and we also know codes that get arbitrarily close to the Shannon entropy.

So, back to your question: if you can perfectly predict your data sequence, it is because you have a probabilistic model of your data that fully determines the value of the next symbol given the previous symbols. If you take a close look at Shannon's entropy, you will see that if the probability of each symbol given the past is 1, then the entropy is 0. Generally speaking, you don't usually encounter data that is perfectly predictable; however, it is really common to build a compression algorithm around a probabilistic model that predicts the next value to be encoded given the past, and to encode the difference between the predicted and the actual value. An example of this is the JPEG algorithm. Hope that helped you.
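To make the entropy argument concrete, here is a small Python illustration; the toy data and the two models are assumptions made for the example, not anything from JPEG:

```python
import numpy as np
from collections import Counter

# The better a probabilistic model predicts the next symbol, the fewer bits
# per symbol an ideal entropy coder needs on average.

def entropy(probs):
    probs = np.array([p for p in probs if p > 0])
    return -np.sum(probs * np.log2(probs))

data = "abababababababab"

# Model 1: treat symbols as i.i.d. -> about 1 bit/symbol for this sequence.
counts = Counter(data)
p_iid = [c / len(data) for c in counts.values()]
print(entropy(p_iid))      # 1.0 bit per symbol

# Model 2: predict the next symbol from the previous one. Here the prediction
# is perfect (a -> b, b -> a), so the conditional entropy is 0 bits/symbol:
# a perfectly predictable source costs essentially nothing to encode.
print(entropy([1.0]))      # 0.0
```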