SVMs typically suffer from drawbacks such as the difficulty of choosing a kernel, slow training, high algorithmic complexity, and large memory requirements. What is the best alternative to the SVM that mitigates these problems?
SVM training complexity is at least quadratic in the number of training instances, which becomes an issue for very large training sets.
If you want to stay in the realm of kernel methods, a reasonable way to tackle this problem is to subsample the training set, train a model on each subsample, and combine the resulting models into an ensemble. This alleviates both the time and the memory complexity issues. This approach is taken in EnsembleSVM, a free C++ package.
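The subsample-and-ensemble idea can be sketched with scikit-learn's bagging machinery (shown here as an illustrative stand-in, since EnsembleSVM itself is a C++ package; the dataset and parameters are arbitrary choices for the demo):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Each base SVM sees only 20% of the data, so its kernel matrix is
# 25x smaller than the one a single SVM on the full set would need.
ensemble = BaggingClassifier(
    SVC(kernel="rbf", gamma="scale"),
    n_estimators=10,
    max_samples=0.2,
    random_state=0,
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```

Because the base learners train independently, they can also be fit in parallel (`n_jobs` in `BaggingClassifier`), which helps with the speed complaint as well.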
Regarding kernel parameters: most learning algorithms have hyperparameters that must be tuned, so this does not really favor any particular method. Efficient libraries for optimizing hyperparameters exist, such as Optunity and HyperOpt.
SVMs have been shown to outperform most other algorithms in a number of benchmark tests, and they remain among the best learning machines, with excellent generalization (the training problem is a convex optimization problem).
I don't have any learning method in mind that would be better than SVMs. If the kernel matrix is the issue, use a linear kernel: a linear SVM can be trained without materializing the kernel matrix at all.
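As a concrete sketch of that point, scikit-learn's `LinearSVC` (backed by LIBLINEAR) works directly in feature space, so memory stays proportional to the data itself rather than to an n-by-n kernel matrix (the synthetic dataset below is just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# 10,000 samples: an RBF SVC would need a 10000 x 10000 kernel matrix,
# while LinearSVC only ever touches the 10000 x 50 data matrix.
X, y = make_classification(n_samples=10000, n_features=50, random_state=0)

clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X, y)
print(clf.score(X, y))
```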
I'd try linear programming SVMs to avoid quadratic programming, or Lagrangian SVMs for computational efficiency.
For large-scale (big) data mining there are linear programming machines that evolved from SVMs.
An Extreme Learning Machine, a single-hidden-layer feedforward network with randomly fixed input weights, offers much lower training time with competitive classification accuracy. There are many implementations in Java, Python, etc. It is a good alternative to SVM-based training.
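The core of an ELM is small enough to sketch directly in NumPy: the input weights are random and never trained, and only the output weights are solved in closed form by least squares (the toy problem and hidden-layer size below are arbitrary):

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary problem with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = np.sign(X[:, 0] + X[:, 1] ** 2 - 0.5)

W, b, beta = elm_fit(X, y, n_hidden=50)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
print(acc)
```

The absence of iterative weight updates is exactly why training is fast: the whole fit is one matrix product plus one least-squares solve.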