In general, a dataset used for classification with the LIBLINEAR classifier is scaled to the range 0 to 1. If data scaled to the range 0.1 to 1 is instead fed to the same classifier, would it affect the result?
If I remember correctly, the features only need to be on the same order of magnitude, so a small difference in the scaling range should not have any noticeable effect.
Of course you won't get exactly the same model: the weights, centers of inertia, regression equation, etc. will differ, but the classification itself, i.e. which class each data point is assigned to, will be the same. The most important thing is to apply the same normalization to the future data (test/validation set) you will need to classify, as in the sketch below.
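A minimal sketch of that point, assuming scikit-learn's `MinMaxScaler` and `LinearSVC` (which wraps LIBLINEAR) as stand-ins for the setup in the question; the dataset and classifier settings are illustrative assumptions, not part of the original question:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC  # LinearSVC is backed by LIBLINEAR

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

predictions = {}
for feature_range in [(0.0, 1.0), (0.1, 1.0)]:
    # Fit the scaler on the training data only.
    scaler = MinMaxScaler(feature_range=feature_range).fit(X_train)
    clf = LinearSVC(C=1.0, random_state=0, max_iter=10000)
    clf.fit(scaler.transform(X_train), y_train)
    # Reuse the *same* scaler for the data you need to classify later.
    predictions[feature_range] = clf.predict(scaler.transform(X_test))

# The learned weights differ between the two ranges, but the predicted
# labels typically agree.
agreement = np.mean(predictions[(0.0, 1.0)] == predictions[(0.1, 1.0)])
print(f"Fraction of identical predictions: {agreement:.3f}")
```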
In theory it should not matter which normalization range you choose for the data set; it mainly affects how the data looks when plotted. I would even speculate that large-margin classifiers like SVMs could handle a data set containing a mixture of both 0-1 and 0.1-1 normalization.
Researchers usually opt for either normalization or standardization of the data, but both have their own weaknesses. If there are outliers, normalization forces most of the data into a narrow part of the range; standardization, on the other hand, distorts the data and does not produce bounded values. The sketch below illustrates the outlier effect.
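A small illustration (my own example, not from the original answer) of these two weaknesses, assuming scikit-learn's `MinMaxScaler` and `StandardScaler`:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])  # one extreme outlier

# Min-max normalization: the regular points are compressed near 0.
print(MinMaxScaler().fit_transform(x).ravel())
# -> [0.    0.001 0.002 0.003 1.   ]

# Standardization: values are not bounded to any fixed interval.
print(StandardScaler().fit_transform(x).ravel())
# -> roughly [-0.504 -0.501 -0.499 -0.496  2.   ]
```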
Therefore, I proposed a new distance metric that is invariant to the data scale, because the distance for each feature lies in the range [0, 1). For more information, see this paper: https://www.researchgate.net/publication/264995324_Dimensionality_Invariant_Similarity_Measure