In principle you can keep just the support vectors from your data set and retain the same classifier. However, unfortunately, you don't know which examples will end up as support vectors before you actually train the SVM...
I once heard the strange comment, meant as a criticism, that the SVM "wastes" all the data that do not end up as support vectors... That is of course nonsense, because it is precisely the selection of the support vectors from all the available data that makes the SVM work.
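This is easy to check empirically. Below is a minimal sketch, assuming scikit-learn; the blob data set and the C value are arbitrary choices for illustration. It fits an SVM on a toy set, refits on the support vectors alone, and compares the two classifiers.

```python
# Minimal sketch, assuming scikit-learn; the toy data set is made up
# purely for illustration.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Train on the full data; only now do we learn which points are
# support vectors.
clf_full = SVC(kernel="linear", C=1.0).fit(X, y)
sv = clf_full.support_  # indices of the support vectors

# Refit using only the support vectors.
clf_sv = SVC(kernel="linear", C=1.0).fit(X[sv], y[sv])

# Up to solver tolerance, the two classifiers should agree everywhere.
print(np.abs(clf_full.decision_function(X) - clf_sv.decision_function(X)).max())
print((clf_full.predict(X) == clf_sv.predict(X)).all())
```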
After learning, the training data can be removed, but it may be required for retraining. Assume an SVM is trained on training set S0, yielding support vectors v0, ..., vn. If a new training set S1 (from the same source) becomes available, the support vectors for the combined set S0 U S1 cannot in general be computed from v0, ..., vn and S1 alone; the whole training set S0 U S1 is required.
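A rough sketch of this effect, again assuming scikit-learn with made-up toy data: it compares retraining on the full set S0 U S1 against the shortcut of keeping only the old support vectors plus S1. The two can disagree, because points of S0 that were not support vectors may become support vectors of the combined set once the boundary moves.

```python
# Rough sketch, assuming scikit-learn; the data and parameters are
# hypothetical, chosen only to illustrate the point.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

centers = [[-2.0, -2.0], [2.0, 2.0]]  # one fixed "source" for both sets
X0, y0 = make_blobs(n_samples=100, centers=centers, random_state=0)  # S0
X1, y1 = make_blobs(n_samples=100, centers=centers, random_state=1)  # S1

clf0 = SVC(kernel="linear").fit(X0, y0)
sv0 = clf0.support_  # indices of v0, ..., vn within S0

# Correct update: retrain on the whole set S0 U S1.
X_full = np.vstack([X0, X1])
y_full = np.concatenate([y0, y1])
clf_full = SVC(kernel="linear").fit(X_full, y_full)

# Shortcut: retrain on the old support vectors plus S1 only.
X_short = np.vstack([X0[sv0], X1])
y_short = np.concatenate([y0[sv0], y1])
clf_short = SVC(kernel="linear").fit(X_short, y_short)

# The shortcut may disagree with the correct classifier, because points
# of S0 that were not support vectors can become support vectors of
# S0 U S1.
print((clf_full.predict(X_full) != clf_short.predict(X_full)).sum())
```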
Sajjad: From what I know, after the learning stage one can scrap the input and continue with just the concluding results. The whole point of learning, be it with an SVM or another method, is to represent the data in a compact way and move on to deploy the conclusions.
Jesús: if retraining, or an update, is required, I'd use a new data set for several reasons, such as: avoiding over-fitting, getting a clearer view of the "delta" change, seeing what impact a particular parameter has, etc. Note that my opinion is based on working with a different tool, not SVM.