It depends on whether you are talking about the linearly separable or the non-linearly separable case. In the former, the weight vector can be retrieved explicitly and represents the separating hyperplane between the two classes. The weight associated with each input dimension (predictor) gives information about its relevance for discriminating the two classes. In the non-linear case, the hyperplane is only implicitly defined in a higher-dimensional dot-product space by means of the "kernel trick" (e.g. a Gaussian kernel replacing the dot product). When using non-linear kernels, more sophisticated feature selection techniques are needed to analyse the relevance of the input predictors.
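For the linear case, the weight vector and bias can be read off a fitted model directly. A minimal sketch using scikit-learn (the answer names no library or data, so the dataset here is made up purely for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Hypothetical toy data standing in for the two classes discussed above.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

model = SVC(kernel="linear", C=1.0).fit(X, y)

# With a linear kernel the separating hyperplane w.x + b = 0 is explicit:
w = model.coef_.ravel()       # one weight per input dimension (predictor)
b = model.intercept_[0]

# |w_j| gives a rough indication of how much predictor j contributes
# to the discrimination between the two classes.
for j, wj in enumerate(w):
    print(f"feature {j}: weight = {wj:+.3f}")
print("bias b =", b)
```

With a non-linear kernel (e.g. `kernel="rbf"`) no such `coef_` attribute exists, because the hyperplane lives only in the implicit feature space, which is why other feature-relevance techniques are needed in that case.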
Your question is not entirely clear. I'll assume that you are referring to dual weights (typically denoted by the vector alpha).
All predictions for SVM models -- and more generally for models resulting from kernel methods -- can be expressed as a linear combination of kernel evaluations between (some of) the training instances (the support vectors) and the test instance. This follows from the so-called representer theorem (cf. the link). The coefficients in this linear combination are the dual weights (the alphas) multiplied by the label of the corresponding training instance (the y's).
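As an illustration of that linear combination, here is a small sketch with scikit-learn (not mentioned in the answer; the dataset and kernel parameters are made up). It rebuilds the decision function by hand from the dual coefficients (alpha_i * y_i), the support vectors, and the bias, and checks the result against the library's own prediction:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

# Hypothetical toy data; any binary classification set would do.
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

gamma = 0.5
model = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

X_test = X[:3]

# Kernel evaluations between the support vectors and the test instances.
K = rbf_kernel(model.support_vectors_, X_test, gamma=gamma)  # (n_SV, n_test)

# dual_coef_ already stores alpha_i * y_i for each support vector,
# so the decision function is just their linear combination plus the bias.
manual = (model.dual_coef_ @ K + model.intercept_).ravel()

print(manual)
print(model.decision_function(X_test))  # should match the manual values
```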
How do you compute the weight vector w and bias b in a linear SVM? We have the hyperplane equation and the positive and negative examples. In the equation Wx + b = 0, what is meant by the weight vector, and how do you compute it? Can anybody explain it, please?
Do the weights associated with the variables in a support vector regression problem not tell us the impact of a particular variable on the dependent variable, as they do in linear regression?
What do the weights in support vector regression tell us, in layman's terms and in technical terms?