I would say that SVM optimization scales only linearly with the dimension, since the kernel computation is the only step in which the dimension itself appears. All other steps can be carried out on the kernel values alone. From my naive point of view, I see no reason why the dimensionality would otherwise affect the run time of the algorithm. Fast implementations (LIBLINEAR etc.) might work differently, however.
As Michael mentions, the cost of computing a single kernel value is linear in the number of dimensions, i.e. O(d). Thus for N samples, the cost of computing the kernel matrix (linear, polynomial, etc.) is O(dN^2); more precisely, about dN^2/2 kernel evaluations suffice, since the matrix is symmetric. The other steps of SVM training, e.g. fitting the model given the kernel matrix, have no dependency on d. For more details see Andrew Moore's slides at http://www.autonlab.org/tutorials/svm15.pdf (particularly slide 27 for a tabular summary).
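To make the O(dN^2) claim concrete, here is a minimal sketch (assuming NumPy) of the naive Gram matrix computation. Each entry costs one O(d) dot product, and exploiting symmetry means only ~dN^2/2 of them are needed:

```python
import numpy as np

def linear_kernel_matrix(X):
    """Naive O(d N^2) computation of the Gram matrix K[i, j] = <x_i, x_j>."""
    N, d = X.shape
    K = np.empty((N, N))
    for i in range(N):
        # Symmetry: only the upper triangle is computed, ~d*N^2/2 multiplications.
        for j in range(i, N):
            K[i, j] = K[j, i] = X[i] @ X[j]  # each dot product costs O(d)
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))  # N = 50 samples, d = 10 dimensions
K = linear_kernel_matrix(X)
assert np.allclose(K, X @ X.T)  # matches the vectorized Gram matrix
```

Once K is built, the dimension d never appears again: the dual optimization works entirely with the N x N matrix K, which is why the overall dependence on d is only linear.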