In more detail (you can even find this on Wikipedia): the SVM takes points from an N-dimensional space and places them in an M-dimensional space. By doing so, points from different classes become linearly separable by a hyperplane. The trick is having M > N, possibly even infinite.
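A minimal sketch of that idea, with hypothetical 1-D data and the explicit map phi(x) = (x, x^2): points that no single threshold can separate in N = 1 dimension become separable by a line in M = 2 dimensions.

```python
import numpy as np

# 1-D data: class A sits between the two clusters of class B,
# so no single threshold separates them on the line.
x_a = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # class A
x_b = np.array([-3.0, -2.5, 2.5, 3.0])        # class B

def phi(x):
    # Explicit map from N=1 to M=2 dimensions: x -> (x, x^2)
    return np.stack([x, x ** 2], axis=1)

# In the 2-D feature space the horizontal line x2 = 2 separates the
# classes: all of class A lands below it, all of class B above it.
print(phi(x_a)[:, 1])  # [1.   0.25 0.   0.25 1.  ]  -> all < 2
print(phi(x_b)[:, 1])  # [9.   6.25 6.25 9.  ]       -> all > 2
```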
My answer is "no", because the kernel is a function that produces a result similar to that of another function that increases the dimensionality to make the data linearly separable.
Reading Bennamar's answer, maybe it is better if I offer more details. The kernel itself increases the dimensionality in order to classify the input. As output you get only one value (using a generic SVM): either the input is from the desired class or it is not. So the kernel DOES increase the dimensionality, but what you get as the result of the SVM is just one value.
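A small sketch of that last point (scikit-learn, toy data): whatever dimensionality the kernel works in internally, the classifier returns a single value per input.

```python
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [2, 2], [3, 3]]
y = [0, 0, 1, 1]

clf = SVC(kernel="rbf")  # RBF kernel: infinite-dimensional feature space
clf.fit(X, y)

# The output is still one value per input: the predicted class.
print(clf.predict([[0.5, 0.5], [2.5, 2.5]]))  # [0 1]
```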
Your question is one that almost everyone who begins with kernel methods has asked, or at least some variant of it.
The answer is YES in theory, because the general idea and advantage of using a kernel is that if we can't find a solution in N dimensions, we try in N+k dimensions.
However, in practice we use the "kernel trick". This approach does not explicitly transform the N-dimensional data into a higher-dimensional representation; instead, it uses the kernel function to compute the inner products between the images of all pairs of data points in the feature space (based on Mercer's theorem). There is no need to explicitly increase the dimension by transforming the data into feature-vector representations in a higher-dimensional space.
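A small sketch verifying this for the degree-2 homogeneous polynomial kernel K(x, y) = (x . y)^2 in 2 dimensions, whose explicit feature map phi(x) = (x1^2, sqrt(2) x1 x2, x2^2) lands in R^3: the kernel gives the same inner product without ever materializing the 3-D space.

```python
import numpy as np

def phi(v):
    # Explicit feature map into R^3 for the degree-2 polynomial kernel
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

def K(x, y):
    # Kernel trick: same value computed entirely in the original 2-D space
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

print(np.dot(phi(x), phi(y)))  # 16.0 -- inner product in feature space
print(K(x, y))                 # 16.0 -- identical, no explicit mapping
```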
Yes, the main idea of the kernel function is to map data that are linearly inseparable in the observation space into a higher-dimensional space where they are linearly separable.
The kernel, if it is a Mercer kernel, can be expressed as K(x, y) = phi(x) · phi(y), and the dimension of the feature space (the dimension of phi(x)) is given by the number of nonzero eigenvalues of the Mercer decomposition of the kernel. Some kernels induce finite-dimensional transformations, such as polynomial kernels, and others map the data to R^infinity, such as the exponential kernel. In practice, the dimension will be finite, because the eigenvalues of the integral operator L_K associated with K are estimated using the Gram matrix of the sample at hand (which is usually finite). Thus, in general, kernels increase the dimensionality.
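A minimal sketch of that estimation step (NumPy only, random stand-in data), using the Gaussian/RBF kernel as one example of a kernel whose feature space is infinite-dimensional: the spectrum of L_K is approximated by the eigenvalues of the Gram matrix, so the dimension actually observed is bounded by the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # 50 samples in R^3

# Gaussian/RBF Gram matrix: K_ij = exp(-||x_i - x_j||^2 / 2)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
gram = np.exp(-sq_dists / 2.0)

# Eigenvalues are real and nonnegative, since the Gram matrix is
# symmetric positive semidefinite for a Mercer kernel.
eigvals = np.linalg.eigvalsh(gram)
print((eigvals > 1e-10).sum())  # number of "nonzero" eigenvalues, at most 50
```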
On the other hand, there is a case where the dimensionality is decreased: the One-Class SVM, where the data are projected onto a 1-dimensional space.
I have just asked my teacher about this question :). He said that it depends on the context. In general, the kernel increases the dimension, but "implicitly". In other words, it increases the dimension without increasing the running time (this is the benefit of using a kernel).
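A rough sketch (NumPy, hypothetical sizes) of why that implicit increase is cheap: evaluating the inhomogeneous polynomial kernel (x . y + 1)^d costs one O(N) dot product regardless of d, while the explicit feature space it corresponds to has C(N + d, d) dimensions.

```python
from math import comb
import numpy as np

N, d = 100, 5
x, y = np.random.rand(N), np.random.rand(N)

k = (np.dot(x, y) + 1) ** d  # one O(N) dot product, whatever d is
dim = comb(N + d, d)         # dimension of the explicit feature space
print(k, dim)                # dim = 96,560,646 already for N=100, d=5
```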